Today — 22 February 2026

Psychological capital mitigates the impact of interpersonal sensitivity on anxiety in future nurses

22 February 2026 at 03:00

A new study published in BMC Psychology has found that nursing students who are highly sensitive to others’ reactions are more likely to experience anxiety, and that their levels of inner psychological resources play a key role in this link.

Even before nursing students begin their careers in hospitals, many experience anxiety tied to academic workloads, clinical placements, and the emotional weight of caring for patients.

Researchers have long known that stress and anxiety are common in nursing programs, but this new study sheds light on why some students may be more vulnerable than others.

One factor the researchers examined is interpersonal sensitivity, which refers to being unusually alert to how others behave, speak, or react. People high in interpersonal sensitivity often worry about being judged, criticized, or rejected. While this trait has been studied in relation to depression, its connection to anxiety—especially among nursing students—has received far less attention.

To address this gap, the research team – led by Yanyan Mi (Xuzhou Medical University) and Zhen Wang (Taishan Vocational College of Nursing) – surveyed 1,815 nursing undergraduates (1,511 females) at a university in eastern China.

Students completed questionnaires measuring anxiety symptoms, interpersonal sensitivity, perceived social support, and psychological capital. Perceived social support refers to an individual’s subjective feeling of being supported by intimate relationships with family, friends, or significant others such as teachers, classmates, and relatives. Psychological capital is a positive mental state that includes a person’s sense of hope, resilience, optimism, and confidence in their ability to handle challenges.

The results revealed that students who scored higher in interpersonal sensitivity were much more likely to report anxiety symptoms. Importantly, the researchers found that psychological capital played a powerful mediating role. Students who were highly sensitive to others tended to have lower levels of psychological capital, which in turn made them more prone to anxiety. In other words, when students lacked inner psychological resources, such as confidence or resilience, their sensitivity to social interactions had a stronger emotional impact.

Social support also played a role, though the mechanics were slightly different. While the study confirmed that social support can independently mediate the relationship between interpersonal sensitivity and anxiety, it was not the primary driver in the combined chain model. Instead, social support contributed most effectively when combined with psychological capital in a chain effect. This suggests that supportive relationships from others help build internal psychological resources, which then protect against anxiety.
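The chain logic described above can be sketched with a small simulation. Every variable name, effect size, and data point below is invented for illustration and is not taken from the study: one regression estimates the path from sensitivity to psychological capital, a second estimates the path from psychological capital to anxiety while controlling for sensitivity, and the indirect (mediated) effect is the product of the two path coefficients.

```python
# Toy simulation of a simple mediation chain (illustrative only):
# interpersonal sensitivity -> psychological capital -> anxiety.
import numpy as np

rng = np.random.default_rng(0)
n = 1815  # matches the study's sample size; the data itself is simulated

sensitivity = rng.normal(size=n)
# Higher sensitivity lowers psychological capital (path a, negative)
psycap = -0.5 * sensitivity + rng.normal(size=n)
# Lower psychological capital raises anxiety (path b, negative coefficient)
anxiety = 0.3 * sensitivity - 0.6 * psycap + rng.normal(size=n)

def slope(x, y):
    """Ordinary least-squares slope of y on x (single predictor)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

a = slope(sensitivity, psycap)  # sensitivity -> psychological capital

# Path b: psychological capital -> anxiety, controlling for sensitivity
X = np.column_stack([np.ones(n), sensitivity, psycap])
b = np.linalg.lstsq(X, anxiety, rcond=None)[0][2]

indirect = a * b  # mediated effect: two negative paths yield a positive product
```

Because both paths are negative, their product is positive: higher sensitivity is indirectly associated with higher anxiety through depleted psychological capital, which mirrors the pattern the researchers report.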

The findings highlight an important message for nursing programs: building students’ psychological capital may be one of the most effective ways to reduce anxiety, especially for those who are highly sensitive in social situations. Interventions such as resilience training, group counseling, and mentorship programs could help students develop stronger internal coping skills.

“If nursing students leave nursing profession positions, the shortage of nurses will continue to expand, the quality of nursing will be affected in the future, the relationship between nurses and patients will be more tense, and the safety of patients will be threatened. … As such, there is a crucial need to prioritize the mental health of undergraduate nursing students,” Mi and Wang’s team emphasized.

However, the authors note several limitations. For instance, the study has a cross‑sectional design, which cannot definitively prove cause and effect, and all participants were sourced from a single university.

The study, “Exploring the impact of interpersonal sensitivity on anxiety symptoms: the mediating role of psychological capital and social support among nursing students,” was authored by Yanyan Mi, Zhen Wang, Lixin Peng, Chaoran Zhang, and Haibo Xu.

Men and women tend to read sexual assault victims’ emotions differently, study finds

A new study published in Evolution & Human Behavior finds that men tend to underestimate how upset women would feel after sexual assault by an intimate partner, while women tend to overestimate how upset men would feel.

Theory of mind is often treated as a general cognitive skill. However, evolutionary perspectives suggest that mind-reading may be partly domain-specific, especially in areas where men and women have historically faced different adaptive challenges. One such domain is sexual violence, where women have disproportionately experienced victimization and its associated physical, reproductive, and psychological costs.

Dr. Rebecka K. Hahnel-Peeters, an assistant professor in the Department of Psychology at Indiana State University, explained, “It’s hard to narrow down what drew me to this topic, but three primary influences stand out: (1) a larger lab interest in the domain-specificity of theory of mind, (2) my interest in replication efforts, and (3) the topic’s connection to my broader research program on sexual victimization and threat management.”

Hahnel-Peeters described how the project emerged from a collaborative effort. “This paper began as part of a larger lab discussion about whether theory of mind was domain-specific in its content. Theory of mind is our ability to infer others’ knowledge, thoughts, emotions, and desires—and importantly, to recognize that these mental states may differ from our own. Previous work has explored when perceptual errors in inferences about the opposite-sex’s mental states might be favored by selection (e.g., Haselton & Buss, 2000).”

“My colleagues, William Costello and Paola Baca, were equally interested in perceptual errors and biases in cross-sex theory of mind, particularly in judgments of sexual desire. We decided a group project was in order, each selecting a domain within mating psychology to explore. This naturally led me to consider opportunities for replication.”

The replication focus was central, the author shared: “Our mentor and co-author, Dr. David Buss, previously documented that men tend to underestimate how upsetting sexual aggression perpetrated by a romantic partner is to the average woman (Buss, 1989). Women were also inaccurate in estimating men’s reactions to such aggression. Those data were collected in the late 1980s. Given how much time has passed—and considering the increased awareness surrounding sexual violence following the 2017 #MeToo movement—I was curious whether these effects would replicate today.”

“If the misperception replicated, there were important implications for how we prosecute sexual violence. Much prosecution of sexual violence relies on the reasonable person standard — if the acts committed by the alleged perpetrator would cause a ‘reasonable person’ fear. This assumes men and women have a shared baseline of emotional responses to sexually threatening behavior. If a ‘reasonable man’ consistently differs from a ‘reasonable woman,’ then our current legal standards may be mismatched to the reality of sexual violence.”

The researchers recruited participants through social media and the participant pool at The University of Texas at Austin. The final sample included 781 participants, 61% of whom were female, ranging in age from 18 to 67 years. The design required participants to rate their own reactions, as well as the reactions of the “average man” and “average woman,” across multiple domains.

To measure emotional upset, participants rated 150 behaviors originally drawn from Buss (1989), including four items reflecting sexual aggression by an intimate partner, such as being forced to have sex. They also reported their fear of rape and other crimes, along with their perceived likelihood of being victimized by crimes such as sexual assault and sexual harassment.

In addition, participants completed measures assessing sociosexual orientation, Dark Triad traits (Machiavellianism, narcissism, psychopathy), empathy, self-perceived mate value, and perceived formidability. These individual difference measures allowed the authors to test whether cross-sex errors reflected adaptive inferential biases or instead were explained by projection of one’s own reactions.

“American men and women tend to systematically misperceive each other’s emotional upset in the domain of sexual violence perpetrated by one’s romantic partner, and this misperception is surprisingly stable across time,” Hahnel-Peeters told PsyPost.

“Although it’s true that both men and women accurately reported [that] the average woman experiences more upset, men significantly underestimated the upset that women actually reported. This has important implications for interpersonal communication, empathy, and legal contexts. We hope educating about these biases may improve education, prevention efforts, and how we define a ‘reasonable person’ in sexual violence cases.”

In contrast, men were relatively accurate in estimating women’s fear of rape and perceived likelihood of sexual victimization, while women overestimated men’s perceived likelihood of victimization.

Analyses of individual differences revealed that most cross-sex errors were strongly predicted by participants’ own self-ratings. For example, men’s underestimation of women’s upset was primarily explained by how upset the men themselves reported they would feel. Dark Triad traits and sociosexual orientation did not robustly predict these errors in the upset domain, lending more support to the byproduct (egocentric bias) hypothesis than to a specialized adaptive bias account, although some personality traits (e.g., psychopathy, Machiavellianism) were linked to errors in perceived likelihood and fear.

When asked if there are any caveats, the author emphasized, “Absolutely, as with any study.”

“Although we replicated key patterns found in the late 1980s, these data are confined to the contexts of undergraduate students in the United States. Our data cannot definitively determine if these misperceptions result from evolved design features or byproducts of some other cognitive system. We tested several theoretically relevant individual differences, but future research would benefit from more diverse participants and methodology to better evaluate the adaptive vs. byproduct hypotheses.”

Hahnel-Peeters also highlighted unanswered questions. “Future research could examine cross-sex mindreading across cultures, such as those differing in sexuality or gender norms. To further test predictions about how reproductive status may calibrate theory of mind in this domain, future research should include a greater age range — specifically sampling for post-menopausal women. Another question includes how jurors’ individual differences in the magnitude of error influences perceived victim and perpetrator culpability. We’re only just beginning to understand these misperceptions.”

Taken together, the findings suggest that cross-sex misunderstandings in the domain of sexual violence are systematic, stable across decades, and shaped in part by individuals’ own emotional baselines.

“This project highlights that even well-meaning individuals can hold inaccurate beliefs about the opposite sex’s experiences with sexual violence. These gaps in perceptions hold real consequences. A better understanding of these perceptual errors supports better policy-making, prevention efforts, and support for victims,” Hahnel-Peeters concluded.

The research “Cross-sex theory of mind in the domain of sexual violence: upset, fear, and perceived likelihood” was authored by Rebecka K. Hahnel-Peeters, William Costello, Paola Baca, David P. Schmitt, and David M. Buss.

Researchers discover a surprising link between ignored hostility and crime

21 February 2026 at 23:00

A recent study published in the journal Deviant Behavior reveals that people who endure negative treatment are more likely to express an intention to commit future crimes, even when they do not consciously recognize their mistreatment. Independent observers can identify this unseen adversity, showing that hidden emotional burdens can shape human actions. These unacknowledged experiences carry weight, altering behavior beneath the surface of conscious thought.

Criminologists study how hardship influences human behavior to better understand the root causes of crime. According to a prominent framework called general strain theory, experiencing aversive events causes negative emotions. These emotions, particularly anger, can prompt individuals to engage in rule-breaking or illegal activities as a way to cope with their distress.

In this theoretical context, a strain is simply a negative experience, such as being treated poorly by others or failing to achieve a personal goal. Historically, researchers have measured this hardship by asking individuals to report the negative events in their own lives. This self-reported measurement captures what academics call perceived strain, representing the individual’s own understanding of their reality.

Relying entirely on self-reporting presents a specific challenge in behavioral research. Individuals do not always recognize or admit to the negative treatment they endure in their daily lives. A person might actively downplay a traumatic event because it is too painful to confront directly, altering their perception to protect their own self-image.

This intentional minimization is known as a controlled process. A victim might convince themselves that a hostile interaction was simply a misunderstanding, reducing the importance of the event to preserve their peace of mind. By altering how they evaluate the outcome, the individual avoids the immediate pain of the experience.

In other cases, individuals might process the negative social information automatically, completely outside of their conscious awareness. Because the human brain receives a vast amount of sensory information at any given moment, people selectively attend to certain details while ignoring others. This means that a person might be the victim of hostility but fail to consciously register the attack as it happens.

Psychological research indicates that human memory and perception often involve implicit processes that operate below the threshold of awareness. A person can have a subliminally triggered emotional reaction that drives their judgment without any accompanying feelings. The hostile stimuli still enter the brain, but the mind does not translate that input into a recognized emotional state.

Because of these cognitive blind spots, self-reported surveys might miss a vast amount of hardship. Shelley Keith, a criminologist at the University of Memphis, wanted to capture these hidden experiences to see how they impact behavior. Keith and her colleague Heather L. Scheuerman sought to understand if an independent observer could identify negative treatment that a victim overlooks.

To investigate this dynamic, the researchers analyzed data from a specialized judicial initiative known as the restorative justice program in Australia. This program brought together offenders, victims, and community members to discuss the harm caused by specific crimes. The meetings were designed to repair relationships and help the offender make amends through open dialogue.

Despite the positive goals of the program, the discussions could also expose the offender to high levels of public stigma. The emotional weight of facing a victim and the broader community can result in negative treatment and social rejection. This intense environment provided a unique setting to evaluate different perceptions of hardship and social friction.

The study included 385 offenders who had committed offenses such as shoplifting, property crimes, or driving under the influence of alcohol. Trained staff members attended these meetings and silently evaluated how the offenders were treated by the group. These observers attempted to blend in and watch the proceedings from unobtrusive vantage points to avoid disrupting the process.

The independent observers rated the level of respect, forgiveness, and hostility directed at the offender. They noted whether the group treated the individual like an irredeemable criminal or made it clear that the offender could move past their mistakes. Because these observers were completely impartial, their ratings formed a reliable measure of observed strain.

Following the meetings, the offenders completed their own structured interviews regarding the same social interactions. They rated the exact same aspects of their treatment to establish a measure of perceived strain. This dual approach allowed the researchers to directly compare what the offender felt with what the neutral third party witnessed in the room.

The offenders also answered questions about their current emotional state, specifically focusing on how angry or bitter they felt after the meeting. Finally, they reported their projected offending, which serves as a metric for future intentions. Projected offending is a self-assessed measure of how likely the individual is to obey or break the law in the coming weeks and months.

When analyzing the data, Keith and her team discovered a split between the two types of measurement. As expected, when offenders personally perceived their treatment as negative, they reported higher levels of anger. This anger then acted as a psychological bridge, increasing the likelihood that the offender would project a return to criminal behavior.

The independent observations told a completely different story. The hardship recorded by the third-party observers did not predict whether the offender would report feeling angry. Offenders did not consciously register the anger associated with the negative treatment seen by the impartial staff members.

Despite this lack of conscious anger, the observed negative treatment still increased the offender’s projected likelihood of breaking the law. The external observations predicted future rule-breaking behavior independently of the offender’s own self-reported feelings. This suggests that individuals can be influenced by negative social interactions even when they do not consciously process the hostility.

The emotional toll of the event might operate beneath the surface, driving behavioral changes without triggering recognizable feelings of anger. People might suppress their emotions or simply lack the emotional awareness to accurately identify their own frustration. This unacknowledged psychological burden can impair decision-making and lead to deviant actions, such as substance abuse or physical aggression.

When individuals fail to attend to their emotions, they often experience increased cognitive load. This mental strain limits their ability to process information and make rational decisions, making aggressive responses more likely. In essence, the unacknowledged trauma demands an outlet, manifesting as antisocial behavior even when the person claims to feel fine.

While the study provides a new way to look at behavioral triggers, it does have certain limitations. The researchers relied on the participants’ stated intentions to commit future crimes rather than tracking their actual legal infractions over time. Intentions often correlate with real actions, but observing actual behavior would provide a stronger test of the underlying theory.

The study also evaluated individuals at a single point in time, which makes it difficult to definitively prove a cause and effect relationship. Additionally, the questions regarding the offenders’ emotions focused primarily on anger. Future investigations should measure other negative feelings, such as depression or anxiety, which might also influence criminal behavior.

The measurement tool used to assess strain was also somewhat limited in scope. Hardship encompasses a wide variety of experiences beyond social rejection or a lack of forgiveness from peers. Future studies should expand these measurement tools to include other types of adversity, such as losing something of value or failing to achieve specific positive goals.

Future investigations should follow participants over a longer period to see how hidden hardships influence actual criminal records. Researchers could also incorporate physiological measurements, such as tracking heart rate or stress hormones. These biological markers could help scientists identify unconscious emotional reactions to negative events as they happen in real time.

Keith and her team suggest conducting in-depth interviews with individuals who experience unacknowledged hardship. This qualitative approach could help clarify exactly why people minimize their trauma and how different coping mechanisms alter their path forward. Understanding these hidden processes could eventually help criminal justice professionals provide better support for individuals navigating the legal system.

If impartial observers can identify hidden distress, policymakers could deploy trained personnel to monitor high-stakes judicial settings. These independent observers could intervene to reduce the stigmatization of offenders during court proceedings or correctional meetings. By identifying unnoticed mistreatment early, these professionals could connect individuals with the support services they need to process their experiences constructively.

The study, “Is Ignorance Bliss: Examining the Association Between Observed and Perceived Strain, Anger, and Projected Offending,” was authored by Shelley Keith and Heather L. Scheuerman.

Yesterday — 21 February 2026

A popular weight loss drug shows promise for treating alcohol addiction

21 February 2026 at 21:00

A medication currently used to treat diabetes and obesity may offer a new way to help people struggling with alcohol addiction. A recent study published in eBioMedicine found that the drug tirzepatide reduces alcohol consumption and prevents relapse behaviors in rodents. These results suggest that medications targeting the body’s metabolic hormones could eventually become an option for treating alcohol use disorder.

Alcohol addiction is a pervasive condition with limited medical treatments. Existing medications only work for some people and are not widely prescribed. This gap in care has prompted researchers to look for alternative approaches that target different systems in the body.

Recently, researchers have turned their attention to medications that mimic hormones produced in the gastrointestinal tract. These hormones naturally regulate blood sugar levels and the feeling of fullness after eating a meal. Medications like semaglutide mimic one of these hormones, called glucagon-like peptide-1.

These metabolic drugs have shown early promise in reducing alcohol intake in both animal studies and human trials. Tirzepatide is a newer medication that mimics two different gut hormones at the same time. It targets the glucagon-like peptide-1 receptor alongside another receptor for a hormone called glucose-dependent insulinotropic polypeptide.

The medication is already approved and widely used for the treatment of diabetes and obesity. Because it activates two biological pathways at once, it often produces stronger metabolic effects than single-hormone drugs. The research team wanted to know if this dual-action drug could also influence the brain circuits that drive alcohol consumption.

Christian E. Edvardsson, a researcher in pharmacology at the University of Gothenburg in Sweden, led the investigation. He collaborated with colleagues at his home institution and the Medical University of South Carolina. The team sought to systematically test how tirzepatide affects different patterns of alcohol drinking in animals.

The researchers first tested how tirzepatide affects the brain’s reward system using male mice. Alcohol normally triggers a release of dopamine, a chemical messenger in the brain that creates feelings of pleasure and reinforces habits. The team measured dopamine levels in the nucleus accumbens, a key brain region involved in motivation and reward processing.

When the mice received alcohol, their dopamine levels spiked. However, when the researchers gave the mice tirzepatide before the alcohol, this dopamine surge was mostly blocked. The drug prevented the chemical reward usually associated with alcohol consumption.

To see if this effect was direct, the researchers delivered alcohol directly into the nucleus accumbens of some mice, rather than injecting it into their bodies. Tirzepatide still blocked the dopamine release. This suggests the medication interacts directly with the brain’s reward circuitry.

The team also observed the animals’ physical behavior to see if the drug altered their preference for alcohol. They used a testing enclosure where one specific room was repeatedly paired with alcohol injections. Over time, mice usually learn to prefer spending time in the alcohol-associated room because they connect it with a rewarding feeling.

Mice treated with tirzepatide did not show a preference for the room paired with alcohol. The researchers also tested the animals after a two-week period with no alcohol or behavioral testing, reintroducing neutral smells that had previously been paired with alcohol. Tirzepatide continued to block their preference for the alcohol-associated environment and its specific smells.

Next, the researchers examined voluntary drinking habits in both male and female rats. They used an intermittent access model, which provides the animals with alcohol every other day to encourage heavier drinking. A single dose of tirzepatide cut the animals’ alcohol consumption by more than half, and it also decreased their overall preference for alcohol compared to plain water.

The team then set up a different experiment to simulate binge drinking in mice. They gave the mice short, concentrated periods of access to alcohol during their most active hours in the dark. Tirzepatide effectively reduced this intensive drinking behavior in both male and female mice.

To study relapse, the researchers temporarily took alcohol away from rats that had grown accustomed to drinking it. Normally, this forced abstinence causes animals to drink much more than usual once the alcohol is returned. This temporary spike in consumption models the urge to relapse in humans.

When the researchers administered tirzepatide before returning the alcohol, the rats did not show this spike in drinking. Instead of drinking more, their alcohol intake dropped below their original baseline levels. The drug successfully prevented the relapse-like behavior.

The researchers also wanted to know if the drug would keep working over a longer period. They gave the rats tirzepatide repeatedly over two weeks. The rats maintained their lowered alcohol intake throughout the entire period without building a tolerance to the medication.

Chronic alcohol use often causes liver damage and widespread inflammation in the body. The research team analyzed the tissues of the rats after the two-week drinking period. They found that tirzepatide reduced liver weight and lowered fat deposits in the liver.

The medication also decreased the levels of inflammatory proteins in the blood. This dual effect on both drinking behavior and metabolic health could be highly relevant for patients. Many people dealing with alcohol addiction also suffer from liver disease and metabolic issues.

Finally, the team looked closer at brain activity to understand where the drug might be exerting its effects. They measured electrical signals in various reward-related brain regions of mice. They noticed lasting changes in electrical activity within the lateral septum, an area of the brain that helps regulate emotional responses and motivation.

By analyzing the proteins in the lateral septum of alcohol-consuming rats, the researchers found changes in specific proteins called histones. Histones act like tiny spools that DNA winds around inside a cell. They help control which genes are turned on or off.

Alcohol consumption often alters these proteins, a process that changes how genes are expressed in the brain. The study suggests tirzepatide might interact with this process to alter drinking behavior. The exact mechanism connecting these protein changes to reduced alcohol intake requires more investigation.

While these results offer a promising new direction, the study has a few limitations that warrant attention. The researchers only used male animals for the experiments involving brain chemistry, electrical activity, and protein analysis. Because male and female brains can respond differently to addiction and to certain medications, future studies need to include female animals in these specific tests.

The drug also caused the animals to eat less and lose weight. While this might benefit people dealing with both alcohol addiction and obesity, it could cause unwanted weight loss in other patients. Doctors would need to monitor this side effect in clinical settings.

Researchers still need to conduct human clinical trials to confirm if tirzepatide is safe and effective for treating alcohol addiction in people. “This is not yet a new treatment for alcohol use disorder. But the findings reinforce the view that drugs targeting these neural systems may be relevant to investigate further as potential treatment options,” says Elisabet Jerlhag Holm, Professor of Pharmacology at the Sahlgrenska Academy, University of Gothenburg.

The study, “Tirzepatide reduces alcohol drinking and relapse-like behaviours in rodents,” was authored by Christian E. Edvardsson, Louise Adermark, Sam Gottlieb, Safana Alfreji, Thaynnam A. Emous, Yomna Gouda, et al.

How unemployment changes the way people dream

21 February 2026 at 19:00

A recent analysis of thousands of social media posts reveals that losing a job alters the narrative landscape of a person’s dreams, stripping away elements of surprise and visual perception while increasing work-related themes. These changes suggest that the mental disengagement people experience during unemployment seeps directly into their sleeping minds, offering employers and researchers a new way to understand workforce well-being. The study was published in the journal Dreaming.

Researchers often rely on the continuity hypothesis to understand nighttime narratives. This concept suggests that a person’s dreams act as a direct extension of their waking life. Sleepers do not simply replay every waking event like a video recording. Instead, they dream about the thoughts, emotional states, and central concerns that hold the most personal meaning to them.

Because careers shape a person’s daily routine and sense of identity, work-related themes appear frequently in sleep. Prior research shows that job-related stress directly correlates with distressing dream content. High-stress environments often lead to work-related nightmares, which can then increase daytime stress in a looping cycle.

Job loss represents a profound disruption to a person’s economic stability and psychological well-being. Meaningful work provides financial resources, a sense of purpose, and societal recognition. Losing a position can trigger an identity crisis, leading to diminished self-worth, social withdrawal, and feelings of alienation.

People struggling with job loss often hesitate to share their experiences due to the stigma attached to being out of work. This reluctance makes it difficult for psychologists to fully measure the emotional toll using traditional self-reported surveys. Dream narratives offer an indirect window into these unvoiced psychological challenges.

Emily Cook, a researcher at the Center for Organizational Dreaming, led an investigation to explore this hidden emotional landscape. She and her co-author, Kyle Napierkowski, wanted to see if specific thematic differences emerge in the dreams of people without jobs compared to those with steady employment.

They suspected that analyzing large collections of online dream diaries could reveal nuanced cognitive patterns that traditional questionnaires miss. To gather a massive sample of narratives, the research team turned to Reddit. This social networking forum allows anonymous users to form communities based on specific interests or shared experiences.

The researchers collected data from a community dedicated entirely to sharing and discussing dreams. To identify participants who were likely out of work, the team looked for users who also participated in communities focused on job loss, recruiting struggles, and career guidance. They gathered dream posts written by these users in the six months before they joined the unemployment-focused groups.

This specific timeline helped capture the mindset of individuals just before or during their transition out of the workforce. The researchers then built a control group of users who posted in the dream community but never interacted with the career-focused forums. They matched the dates of the control posts to the target group to eliminate any seasonal or time-related biases.

After manually filtering out posts that did not contain actual dream descriptions, the team had a dataset of 6,478 reports split evenly between the two groups. To analyze this massive amount of text, the team used a large language model. This type of artificial intelligence processes human language by converting words and sentences into mathematical representations.

This conversion allows computers to identify semantic patterns across thousands of documents in a fraction of the time it would take a human reader. The researchers also used a statistical technique called principal component analysis. High-dimensional data can slow down computer models and obscure important patterns.

This specific analysis method reduces the complexity of massive datasets, highlighting the most important variations without losing the underlying meaning of the text. The team tested multiple machine learning algorithms to classify the reports as belonging to either the target group or the control group. A logistic regression model, which calculates the probability of a specific outcome based on various input factors, performed the best.
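The pipeline described above — turn each dream report into a vector, compress the vectors with principal component analysis, then classify with logistic regression — can be sketched in a few lines. This is an illustrative toy, not the authors' code: the "embeddings" here are random synthetic clusters standing in for real language-model vectors, and all dimensions and group sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for language-model embeddings of dream reports (synthetic
# clusters, not real LLM output; sizes are illustrative).
n, d = 200, 768
target = rng.normal(0.3, 1.0, size=(n, d))    # "target group" dreams
control = rng.normal(-0.3, 1.0, size=(n, d))  # "control group" dreams
X = np.vstack([target, control])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Principal component analysis via SVD: project onto the top components
# to reduce dimensionality before classification.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 20
Z = Xc @ Vt[:k].T  # reduced (400, 20) representation

# Logistic regression fit by plain gradient descent.
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # predicted P(target group)
    w -= 0.5 * (Z.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = float((((Z @ w + b) > 0) == y.astype(bool)).mean())

# The highest- and lowest-scoring reports are the ones driving the split,
# mirroring how the researchers isolated extreme dreams to read off themes.
scores = Z @ w + b
most_target_like, most_control_like = scores.argmax(), scores.argmin()
```

On cleanly separated clusters like these the classifier reaches near-perfect training accuracy; real dream embeddings overlap far more, which is why the researchers compared several algorithms before settling on logistic regression.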

The researchers then isolated the highest and lowest scoring dreams to identify the exact words and themes driving the mathematical differences. The computer models revealed distinctions between the two sets of narratives. The most prominent difference was an overrepresentation of professional and work-related words in the dreams of the target group.

People facing job loss dreamed heavily about workplaces, college, and professional stakes. This finding aligns directly with the idea that dreams reflect waking concerns. Because unemployed individuals experience high stress levels linked to joblessness, work becomes a more intense concern in their daily lives.

The distress associated with active job seeking fuels this heightened prevalence of work-related dreams. The models also detected a noticeable lack of specific elements in the target group. Words indicating surprise, such as feeling shocked or noticing sudden changes, appeared less often in their reports.

The researchers note that to experience surprise, a person must actively process new information and compare it to their expectations. The absence of surprise suggests a more passive cognitive style during sleep. Similarly, the target group used fewer words related to visual observations.

They were less likely to describe the act of seeing, looking at, or observing their dream environment. The researchers interpret this lack of visual and emotional engagement as a sleeping reflection of workforce disengagement. In the business world, human resources professionals measure employee engagement to understand a worker’s enthusiasm and involvement.

An engaged employee works with passion, while an unengaged employee participates without energy or commitment. Engagement often drops right before a person leaves a job, whether through resignation or involuntary termination. The study suggests that this waking disengagement extends deeply into the structure of a person’s dreams.

The result is a less active and less observant nighttime experience. By identifying these systematic differences, the researchers suggest a possible extension of existing psychological theories. Major life circumstances shape not just what people dream about, but how they experience their dream environment.

Disengagement from waking life translates into disengagement from the dream world. While the digital approach allowed for a massive sample size, the researchers noted a few limitations. Anonymous social media users do not share every dream they have.

People tend to post about their most emotionally intense nighttime experiences, which could skew the data toward dramatic narratives. Additionally, employment status was inferred entirely from forum participation. A person posting in an unemployment group might have already found a new job.

Alternatively, a person in the control group might be unemployed but simply chose not to use those specific forums. This potential misclassification introduces some error into the analysis. Future investigations could pair online data collection with targeted surveys to confirm a user’s actual work history.

Gathering direct information from users would help validate the anonymous data. Tracking individuals over time would also help researchers understand how sleep narratives evolve through the distinct phases of losing a job. The psychological experiences of the initial awareness of job loss, the actual search for work, and eventual reemployment likely influence dream content in different ways.

A longitudinal approach could reveal the hidden timeline of these stressors. The team also hopes to explore whether shifting dream themes can predict upcoming job loss before the worker consciously realizes their position is in danger. Subjective well-being often declines months before an actual termination.

Tracking these subtle narrative shifts could detect changes in emotional states before they manifest as behavioral issues at work. This predictive capability could eventually provide large organizations with an anonymous, non-intrusive metric to monitor overall workforce engagement. Gathering aggregated dream trends might offer human resources departments an early warning system for widespread burnout, allowing companies to address engagement challenges before mass resignations occur.

The study, “The Impact of Unemployment on Dream Content,” was authored by Emily Cook and Kyle Napierkowski.

Girls rarely experience the “friend zone,” psychology study finds

21 February 2026 at 17:00

A new study published in Evolution and Human Behavior provides evidence that the tendency for young men to mistake friendliness for sexual interest strengthens gradually throughout their teenage years. The research also suggests that when adolescent girls express romantic interest, boys rarely dismiss it as mere friendliness. Together, these findings help explain how romantic misunderstandings develop during adolescence and mirror the dynamics often seen in heterosexual adults.

In evolutionary psychology, a framework called Error Management Theory proposes that adults have built up specific biases to handle the uncertainty of dating. This theory suggests that men tend to overperceive sexual interest so they do not miss out on rare mating opportunities.

Failing to notice a sexual opportunity carries a high reproductive cost for men. On the other hand, women tend to underperceive sexual interest. This underperception bias helps them gently brush off unwanted suitors without causing conflict and protects their social reputation from rumors.

While these patterns are well documented in adults, scientists did not know at what age these psychological adaptations activate. Because adolescents experience puberty and possess reproductive capabilities, they face many of the same social and biological challenges as adults. The researchers wanted to test whether these misperception biases are already functioning by age 16.

They also wanted to track how these psychological patterns change as teenagers mature into young adults at age 19. If these biases appear too early, they could interfere with normal socializing and play. If they appear too late, adolescents might miss out on important social and romantic learning experiences.

“Imagine you’re having a friendly conversation with someone you secretly have a crush on. You naturally hope they’re talking to you not just out of friendliness, but because they might feel something more. But you can’t know for sure—you have to infer their intentions. Are you their crush? Or just their friend?” said Marius Stavang, a PhD student at the Norwegian University of Science and Technology and member of the Sexual Conflict Research Group.

“This kind of romantic uncertainty is something most people experience at some point. Research on adults shows that men and women tend to make predictable inferential errors in these situations: men often overestimate women’s romantic or sexual interest, while women tend to underestimate men’s interest. However, we didn’t know when these patterns begin to emerge.”

“Do these sex-typical misperception biases already exist in early adolescence, or do they develop later? That was the key gap we wanted to address. Understanding how these misperceptions develop matters because they can lead to awkwardness, disappointment, and in more serious cases, sexual coercion.”

To explore these questions, the scientists analyzed data from 1,290 heterosexual high school students in Norway. The sample included 551 males and 739 females between the ages of 16 and 19. The data was originally gathered as part of the 2013 Health, Sexual Harassment, and Experiences Study in the city of Trondheim.

Students completed surveys in private cubicles or at home during regular school hours. The researchers measured sexual misperception by asking participants about their experiences over the previous 12 months. Specifically, they asked if the students had ever been just friendly to someone of the opposite sex, only to have that person mistake their friendliness for a sexual advance.

This scenario represents being sexually overperceived. They also asked if the students had ever tried to show sexual or romantic interest, only to have the other person assume they were just trying to be nice. This scenario represents being sexually underperceived.

In addition, the participants answered questions about their sociosexuality, which is a person’s willingness to engage in casual sex without a committed relationship. The survey also asked students to rate their own mate value. In evolutionary biology, mate value refers to a person’s overall attractiveness and desirability as a romantic or sexual partner.

The scientists found that the traditional adult pattern of misperception is not fully formed at age 16. Instead, it develops over the course of the late teen years. At age 16, only 7 percent of females reported that males mistook their friendliness for sexual interest.

By age 19, that number grew to 25 percent. Because of this steady increase, females first reported a noticeable male overperception bias at age 17. This suggests that the male tendency to read too much into friendly behavior becomes active in the middle of adolescence.

“We found that the tendency for males to overestimate females’ sexual interest is not fully established by age 16, but appears to strengthen somewhat across middle to late adolescence,” Stavang told PsyPost. “This suggests that the well-known adult pattern—where men interpret women’s friendliness as sexual interest—undergoes developmental change during the teenage years.”

The patterns for underperception looked very different. Across all ages from 16 to 19, a substantial number of males reported that their romantic interest was dismissed as just being nice. About 13 percent of boys experienced this misunderstanding in the past year.

In contrast, only 3 percent of girls reported having their romantic interest mistaken for friendliness. This indicates that the classic experience of being placed in the friend zone is exceptionally rare for teenage girls. The sex difference in underperception is already firmly in place by age 16 and stays relatively consistent through age 19.

“I was surprised that almost no adolescent girls reported that their amorous interest was discounted as just friendliness,” Stavang said. “I had expected this classic experience—often discussed among adult men as the ‘friendzone’—to be more evenly distributed during high school, when dating norms might be less firmly established.”

The researchers also noted an unexpected pattern in how boys were overperceived by girls. The number of boys who had their friendliness mistaken for sexual interest rose from age 16 to age 18, but then dropped sharply to just 3 percent at age 19. Because so few 19-year-old boys were overperceived, and many were still underperceived, age 19 was the first point at which the data showed a female underperception bias.

Personal traits also influenced how often teenagers were misunderstood. A higher interest in casual sex increased the risk of being overperceived for both boys and girls. For boys, this openness to casual sex also increased their chances of having their actual romantic interest ignored.

Self-perceived mate value played a significant role for boys. Males who rated themselves as highly attractive partners were much more likely to have their friendliness mistaken for sexual interest. Relationship status and whether a teenager had experienced their sexual debut did not appear to affect their risk of being misperceived.

There are a few potential misinterpretations and limitations to keep in mind regarding this study. Because the research relied on teenagers reporting their own experiences, the data might be influenced by memory errors or subjective interpretations of social events. Also, the survey did not ask for the specific ages of the people who misunderstood the participants, meaning the misperceptions could have involved older or younger peers.

The research was conducted in Norway, a country known for high gender equality and open attitudes toward teenage dating. The researchers note that cultural rules surrounding dating might cause these biases to develop differently in more conservative societies. Finally, the study only looked at chronological age rather than physical maturity, which might play a bigger role in how teenagers are perceived by others.

“In adults, speed-dating paradigms have been very useful for studying misperception,” Stavang noted. “Participants can report how interested they think the other person is in them, and that can be directly compared to how interested the other person actually reports being. This provides a relatively objective measure of misperception.”

“If similar designs could be adapted ethically and appropriately for adolescent samples, it would represent a major step forward in understanding how sexual misperception biases develop. Longitudinal designs would also be especially valuable for identifying when and why these biases strengthen.”

“Romantic misunderstandings often arise because people are not fully transparent about their feelings,” Stavang added. “Adolescents may also be learning—by observing adults—that they too should communicate ambiguously when dating.”

“Encouraging clearer and more honest communication about interest and lack of interest could potentially reduce these misperceptions. If we understand how these biases develop, we may also be better equipped to interrupt cycles of misunderstanding before they solidify into adult patterns.”

The study, “Adolescent development of sexual misperception biases: females increasingly overperceived, males consistently underperceived,” was authored by Marius Stavang, Mons Bendixen, and Leif Edward Ottesen Kennair.

The psychology of masochism: Is it a disorder or a healing mechanism?

21 February 2026 at 16:00

The concept of masochism often evokes images of whips, chains, and leather. While these elements can certainly be part of the picture, the scientific and historical reality is far more nuanced. At its core, masochism refers to the experience of finding pleasure or gratification in pain, humiliation, or submission. This seeming paradox has puzzled psychologists and neurologists for over a century. How can a sensation designed to warn the body of danger become a source of enjoyment?

Recent research suggests the answer lies in a complex interplay of biology, psychology, and social context. Scientists are finding that pain and pleasure share overlapping neural pathways. They also suggest that the context in which pain occurs can fundamentally alter how the brain processes it. To understand masochism, one must look beyond the physical sensation and examine the mind of the person experiencing it.

The Historical Origins

The word “masochism” has a literary origin. It was coined in 1883 by the German psychiatrist Richard von Krafft-Ebing. He derived the term from the name of Leopold von Sacher-Masoch, an Austrian writer. Sacher-Masoch was a nobleman and journalist known for writing romantic stories about life in Galicia. He became famous for his novella Venus in Furs, published in 1869.

The story within Venus in Furs follows a man named Severin von Kusiemski. Severin is so infatuated with a woman named Wanda von Dunajew that he asks to be her slave. He encourages her to treat him in progressively degrading ways. Wanda is initially hesitant but eventually embraces the role of the dominant figure. Severin describes his feelings during these ordeals as “suprasensuality.” The story mirrors the author’s own life, as Sacher-Masoch famously signed a contract with his mistress to become her slave for six months.

Krafft-Ebing used Sacher-Masoch’s name to describe a specific psychopathology. In his book Psychopathia Sexualis, he defined masochism as a condition where an individual is controlled by the idea of being completely subject to the will of another person. He noted that this idea is often colored by lustful feeling. Krafft-Ebing considered this a perversion of sexual life.

Later, Sigmund Freud expanded on these ideas. In his 1905 work Three Essays on the Theory of Sexuality, Freud linked masochism with sadism. Sadism is the derivation of pleasure from inflicting pain on others. Freud argued that sadism and masochism were two sides of the same coin. He suggested that a person who enjoys inflicting pain is also capable of enjoying receiving it. He viewed these tendencies as stemming from psychological development in early childhood.

Sexual Masochism Disorder

In modern psychology, the definition has evolved. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) distinguishes between masochistic sexual interests and a mental disorder. Many people enjoy masochistic elements in their sexual lives without meeting the criteria for a disorder.

The DSM-5 defines Sexual Masochism Disorder specifically. To receive this diagnosis, a person must experience recurrent and intense sexual arousal from being humiliated, beaten, bound, or made to suffer. This pattern must persist for at least six months. Most importantly, these urges or behaviors must cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.

If an individual engages in these behaviors consensually and experiences no distress or dysfunction, they do not have a disorder. This distinction is vital. It separates consensual BDSM practices from pathological conditions. BDSM is an acronym that stands for Bondage and Discipline, Dominance and Submission, and Sadism and Masochism.

The Neuroscience of Pleasure and Pain

One of the central questions regarding masochism is how physical pain can translate into pleasure. A paper published in The Journal of Sex Research by Cara R. Dunkley and colleagues at the University of British Columbia proposes a theoretical model for this phenomenon. The researchers argue that pain in a BDSM context is qualitatively different from accidental pain.

Accidental pain, such as stubbing a toe, triggers a threat response. It signals danger and creates suffering. In contrast, masochistic pain is often described as “good pain.” The researchers suggest that this transformation occurs through “top-down processing,” a process in which the brain interprets sensory data in light of expectations, memories, and context.

When a person consents to pain in a safe environment, their brain regulates the sensation. This regulation involves the release of specific neurochemicals. The researchers point to endogenous opioids and endocannabinoids as key players. Endogenous opioids are the body’s natural painkillers, similar to morphine. Endocannabinoids are chemicals produced by the body that interact with the same receptors as cannabis.

These chemicals are often released during intense physical exertion, leading to phenomena like the “runner’s high.” The researchers suggest a similar process occurs during masochistic activities. The physiological stress of the activity triggers a flood of these mood-enhancing chemicals. This can blunt the sharpness of the pain and induce feelings of euphoria or relaxation.

Dunkley and her team also highlight the role of sexual arousal. Research indicates that sexual arousal can act as a powerful analgesic, or pain reliever. Studies have shown that stimulation of the genitals can raise the threshold for pain tolerance significantly. When arousal is present, the brain may suppress negative emotional reactions to pain. This allows the physical sensation to be experienced as intense but not necessarily aversive.

Altered States of Consciousness

Beyond the chemical reaction, masochism may serve a psychological function. Dunkley and colleagues discuss the concept of “subspace.” This is a colloquial term used in the BDSM community to describe a trance-like state. It is characterized by feelings of floating, peace, and detachment from reality.

The researchers compare this state to “flow” or mindfulness meditation. During intense sensation, an individual’s focus narrows to the immediate present. This can provide relief from the burdens of self-awareness. The psychologist Roy Baumeister described this as “escaping the self.” For a high-functioning or stressed individual, the forced focus of pain can be a welcome vacation from daily responsibilities and thoughts.

This state appears to reduce activity in the parts of the brain responsible for executive function and self-monitoring. This phenomenon is known as transient hypofrontality. By shutting down the internal monologue, the individual achieves a state of deep relaxation. This paradox—that stress on the body leads to peace in the mind—is a recurring theme in the study of masochism.

Benign Masochism in Everyday Life

Masochism is not limited to the bedroom. A study published in the Journal of Research in Personality in 2023 explores the concept of “benign masochism.” This term refers to the tendency to enjoy negative experiences in a safe context. Common examples include eating extremely spicy food, watching tear-jerker movies, or riding terrifying roller coasters.

Karolina Dyduch-Hazar and Vanessa Mitschke led this research. They sought to determine if people with masochistic traits actively seek out unpleasant stimuli. They conducted studies where participants could choose which videos to watch. The videos varied in emotional tone, ranging from happy scenes to disgusting ones, such as a man vomiting.

The researchers found that individuals who scored high on a scale of benign masochism showed a distinct preference. They were more likely to choose and enjoy videos that were highly arousing and negative. While most people preferred positive content, these individuals found pleasure in the intensity of the negative clips.

The researchers suggest that this behavior stems from a desire for sensation. It also involves the realization that the threat is not real. Dyduch-Hazar explains that the joy comes from realizing that the body has been “fooled.” The physical reaction is fear or disgust, but the mind knows there is no actual danger. This creates a safe space to experience intense emotions.

Links to Childhood Trauma

The relationship between childhood experiences and adult sexual preferences is a subject of ongoing investigation. A study published in Sexologies in 2022 by Mike Abrams and his colleagues explored the link between childhood abuse and sadomasochism. They surveyed over 1,000 adults about their histories of psychological, physical, and sexual abuse.

The findings indicated a correlation. Participants who reported childhood abuse were more likely to report sadomasochistic tendencies in adulthood. The type of abuse seemed to matter. Sexual abuse was most strongly associated with more extreme forms of sadism and masochism. Psychological abuse was linked to milder forms.

Abrams notes that this relationship is complex. It does not mean that all survivors of abuse will develop these interests. Nor does it mean that all masochists were abused. However, the data suggests that early experiences can shape how individuals eroticize power and pain.

Healing or Repetition?

This link to trauma raises a critical question: Is engaging in BDSM a harmful repetition of past abuse, or can it be a form of healing? A 2024 paper in the Journal of Sex & Marital Therapy tackled this difficult issue. Ateret Gewirtz-Meydan and her team reviewed existing literature to understand the mechanisms at play.

They found that for some survivors, BDSM offers a way to reclaim control. This process is sometimes called “rescripting.” In a consensual scene, the survivor calls the shots. They set the boundaries and have the power to stop the action at any moment. This can allow them to revisit traumatic feelings from a position of power rather than helplessness.

This transforms a passive experience of victimization into an active experience of survival and pleasure. The researchers note that BDSM emphasizes explicit consent and negotiation. This framework can help survivors learn to establish and enforce boundaries.

However, the researchers also warn of risks. The intense power dynamics can trigger retraumatization. If a scene goes wrong or boundaries are ignored, it can replicate the original abuse. Dissociation is another risk factor. Dissociation is a coping mechanism where a person detaches from reality. While some seek this state for relief, it can be harmful if it prevents an individual from processing their emotions or recognizing when they are unsafe.

The researchers conclude that there is no one-size-fits-all answer. For some, BDSM is a powerful therapeutic tool. For others, it may reinforce negative patterns. Clinicians are encouraged to approach the topic without judgment and to understand the specific motivations of the individual.

“It is crucial for clinicians to approach this topic with sensitivity and avoid pathologizing BDSM practices,” Gewirtz-Meydan told PsyPost. “Understanding the therapeutic potential of BDSM and fostering open, non-judgmental conversations about it can contribute to destigmatizing and empowering trauma survivors.”

Masochism and Chronic Pain

A surprising area of research links sexual masochism with chronic pain conditions. A study published in the European Journal of Pain in 2026 by Annabel Vetterlein and her colleagues investigated this connection. They surveyed a large group of individuals, some of whom identified as BDSM practitioners and some who did not.

The results showed a significantly higher prevalence of chronic pain among the BDSM practitioners. Approximately 47% of the participants with sadomasochistic interests reported living with chronic pain. This is compared to about 29% in the control group. This finding was consistent across both men and women.

The researchers explored why this might be. They found that practitioners of sadomasochism tended to view pain differently than the general population. They were more likely to see pain as a challenge to be overcome rather than a tragedy to be feared. They also scored higher on measures of sensation seeking.

Vetterlein and her team suggest that engaging in masochism might serve as a coping strategy. The experience of acute, voluntary pain during a BDSM scene triggers the release of pain-relieving neurochemicals. This can provide temporary relief from the persistent, involuntary pain of a chronic condition.

This “fighting pain with pain” approach allows the individual to feel a sense of control. Chronic pain often makes people feel helpless. Voluntary pain restores a sense of agency. The researchers also noted that the social aspect of BDSM might play a role. Sharing the experience of pain with a partner can create a sense of belonging and support that is often lacking for chronic pain sufferers.

Personality Predictors

The study by Vetterlein also sought to identify what personality traits predict an interest in masochism. They found three main factors. The first was having chronic pain, as mentioned above. The second was “sensation seeking.” This is a personality trait defined by the search for experiences and feelings that are varied, novel, complex, and intense. Sensation seekers are often willing to take physical and social risks for the sake of such experiences.

The third predictor was a specific attitude toward pain. Individuals who viewed pain as a “challenge” were much more likely to have masochistic interests. This attitude frames pain as a test of endurance and strength. It removes the victimhood often associated with suffering and replaces it with a narrative of achievement.

A Complex Phenomenon

The scientific understanding of masochism has come a long way since Krafft-Ebing first defined it as a perversion. Today, researchers recognize it as a multifaceted phenomenon. It is not simply a desire to be hurt. It is a complex interaction between the brain’s reward systems, an individual’s psychological history, and their social environment.

Physiologically, it exploits the body’s natural response to stress to create pleasure. Psychologically, it offers a way to alter consciousness, escape self-awareness, and potentially cope with trauma or chronic pain. Socially, it relies on strict codes of consent and trust to transform a threat into a game.

Whether manifested as a sexual preference, a taste for spicy food, or a way to manage past trauma, masochism highlights the adaptability of the human mind. It demonstrates that our experience of reality—and specifically of pain—is not a fixed biological fact. It is a subjective experience that we can shape, reframe, and sometimes even enjoy.

People who engage in impulsive violence tend to have lower IQ scores

21 February 2026 at 15:00

A recent comprehensive review of existing scientific research suggests that individuals who engage in impulsive acts of violence tend to score lower on intelligence tests compared to non-violent individuals. The findings provide evidence that lower intellectual abilities may make it harder for people to resolve conflicts peacefully, though intelligence is just one piece of a complex behavioral puzzle. The research was published in the journal Intelligence.

Scientists from various disciplines have spent decades attempting to understand the underlying factors that drive aggression and violence. While past research provides evidence that lower cognitive abilities are linked to general criminal behavior, the specific relationship between intelligence and violent acts against others has remained less clear. This gap in knowledge prompted researchers to look closer at specific types of aggression.

The researchers conducted the new review to figure out if people who commit violent acts consistently show lower intellectual abilities than those who do not. They also wanted to know if this pattern holds true for different components of intelligence, such as verbal skills and nonverbal problem solving. By clarifying this connection, the scientists hoped to gather information that could help design better rehabilitation programs.

“The main motivation for this study was the absence of a systematic analysis assessing whether violence is truly related to the intelligence quotient (IQ) or whether, on the contrary, it is an independent factor,” explained Ángel Romero-Martínez, a professor of psychobiology at the University of Valencia.

“Although prior research has linked low intelligence to general antisocial behavior, there was a significant lack of specialized systematic reviews focusing exclusively on violence against others. We aimed to resolve the debate over whether low IQ is an inherent characteristic of violent behavior (acting as a facilitator) or merely an incidental variable. By conducting this meta-analysis, we were able to demonstrate that violence—particularly reactive violence—is not independent of cognitive abilities, but is significantly influenced by them.”

To explore this topic, the scientists conducted a systematic review and meta-analysis. This type of research involves gathering all previously published studies on a specific subject and combining their data using statistical tools to find an overall trend. The research team searched three major scientific databases, including PubMed and Scopus, along with exploring reference lists to find studies that measured intelligence and assessed aggressive behavior.

Out of more than 5,000 initially identified articles, the researchers removed duplicates and screened the remaining papers for relevance. They ultimately selected 131 empirical studies that met their strict inclusion criteria. For the statistical analysis, they looked at two main sets of data to evaluate group differences and behavioral associations.

The first part of the analysis compared the intelligence scores of 1,860 violent individuals against a control group of 3,888 non-violent individuals. The second part examined the statistical correlation between intelligence and aggressive behavior across a massive pool of 33,118 participants. These aggressive behaviors included a variety of actions, ranging from general hostility and poor anger control to externalizing behaviors and physical assaults.

The intelligence quotient, commonly known as IQ, is a standardized score used to measure a person’s intellectual abilities, with an average score set at 100. In their analysis, the scientists looked at full IQ scores, as well as verbal and nonverbal scores. Verbal intelligence involves the ability to use and understand language, which is important for communication.

Nonverbal intelligence relates to visual problem solving and abstract reasoning without the use of words. The data showed that violent individuals scored significantly lower on full, verbal, and nonverbal intelligence tests compared to the non-violent control groups. This gap in intelligence scores was particularly large when the violent individuals also suffered from a diagnosed mental or personality disorder.

The findings indicate that these cognitive differences are present regardless of gender. The researchers also noted that differences in socioeconomic status did not seem to explain the gap. Many of the included studies accounted for economic and educational backgrounds, and the intelligence gap remained consistent.

“What was truly surprising was just how clear and robust the relationship turned out to be,” Romero-Martínez told PsyPost. “Beyond finding a general link, the most striking aspect was the consistent relationship across all different types of intelligence (verbal and non-verbal IQ).”

When looking at the broader pool of over 33,000 participants, the scientists found a consistent negative correlation between intelligence and violence. This means that as IQ scores decrease, the tendency to engage in violent behavior tends to increase. The correlation coefficients ranged from negative 0.09 to negative 0.20, pointing to a modest but reliable link between lower intelligence and aggressive tendencies.
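Pooling correlation coefficients across studies, as a meta-analysis like this one does, is typically performed on Fisher z-transformed values weighted by sample size. Below is a minimal fixed-effect sketch of that idea; the study tuples are invented for illustration and are not the paper's data, and actual meta-analyses often use random-effects models instead:

```python
import math

def pool_correlations(studies):
    """Fixed-effect inverse-variance pooling of correlation coefficients.

    Each study is a (r, n) pair. Correlations are converted to Fisher z,
    weighted by n - 3 (the inverse of the z variance), averaged, and
    transformed back to r.
    """
    num = 0.0
    den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher z-transform of the correlation
        w = n - 3           # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform the weighted mean to r

# Invented study results: (correlation, sample size)
studies = [(-0.09, 400), (-0.15, 250), (-0.20, 150)]
pooled = pool_correlations(studies)  # falls between the smallest and largest r
```

The pooled estimate always lands inside the range of the individual correlations, with larger studies pulling it more strongly toward their values.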

The research suggests that this lower intelligence is primarily associated with reactive violence. Reactive violence is defined as an impulsive, emotional outburst of aggression in response to frustration or a perceived threat. It differs from proactive violence, which is planned, calculated, and goal oriented.

The scientists propose that lower intellectual abilities might limit an individual’s mental resources for managing stress. Without strong problem solving or verbal skills, a person may struggle to process frustration and navigate conflicts peacefully. In high stress situations, this cognitive limitation can act as a facilitator for impulsive physical or verbal aggression.

“The most important takeaway is that while our study found a correlation between lower IQ and reactive violence, having a lower IQ does not mean a person will be violent,” Romero-Martínez explained. “It is crucial to understand that intelligence is just one factor within a much more complex problem involving biological, social, and psychological variables. Rather than a direct cause, a lower IQ acts as a facilitator.”

“It may limit an individual’s cognitive resources to manage stress or solve conflicts peacefully, making them more prone to impulsive or reactive aggression. Therefore, these findings should be used not to label individuals, but to improve rehabilitation programs by tailoring them to the specific cognitive needs of each person, helping them develop better non-violent coping strategies.”

“The practical significance of these effects should not be interpreted to blame or stigmatize individuals with lower IQ scores,” Romero-Martínez continued. “Instead, the real value of these findings lies in identifying the therapeutic needs of people involved in violent acts.”

“By understanding that cognitive limitations can act as a barrier to peaceful conflict resolution, we can develop more effective intervention programs tailored to individual needs. These results suggest that rehabilitation should focus on providing specific tools and strategies that match the person’s cognitive profile, ultimately helping them to manage frustration and avoid violent behavior more successfully.”

The study does have some limitations that scientists will need to address in future research. For instance, the original studies included in the review used a wide variety of different intelligence tests, which could introduce inconsistencies into the data. Additionally, the researchers only included studies published in English or Spanish, which might restrict how well the results apply to other global populations.

Moving forward, scientists plan to explore other mental factors that might influence the relationship between intelligence and reactive violence. They aim to study how specific mental processes, such as cognitive flexibility and impulse control, play a role in aggressive outbursts.

“As we gain deeper knowledge and a more nuanced understanding of these contributors, we will be better equipped to develop effective strategies to intervene and prevent this type of behavior,” Romero-Martínez said. “We do not want our work to remain solely on a theoretical level. Our ultimate ambition is for our findings to have a real-world impact. By translating this research into practical tools and evidence-based policies, we aim to provide society with better resources to address the root causes of violence and foster safer environments for everyone.”

The study, “Analysis of the intelligence quotient and its contribution to reactive violence: A systematic review and meta-analysis,” was authored by Ángel Romero-Martínez, Carolina Sarrate-Costa, and Luis Moya-Albiol.

Psychologist explains why patience can be transformative

21 February 2026 at 07:00

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

On Monday, January 19, 2026, the Hidden Brain podcast, hosted by Shankar Vedantam, featured psychologist Sarah Schnitker. The episode, titled “You 2.0: The Practice of Patience,” challenged the conventional view of patience as a passive trait. Schnitker framed it instead as an active form of emotional regulation that protects mental and physical health.

In the first half of the interview, Vedantam and Schnitker discussed the boundaries of healthy patience. Schnitker explained that virtuous patience occupies a “sweet spot” between the extremes of recklessness and passivity. She noted that true patience often requires courage, citing Martin Luther King Jr.’s approach to civil rights as an example of active waiting rather than resignation.

Research tracking personal goals supported this distinction. Schnitker mentioned a study showing that individuals who balanced patience with courage were able to pursue their objectives effectively. Those who lacked courage often slipped into passivity, failing to make progress despite their willingness to wait.

Later in the episode, the conversation shifted to specific psychological strategies for managing impatience. Schnitker advised against suppressing feelings of frustration, as this often backfires. She suggested that simply acknowledging the emotion and observing it from a third-person perspective can reduce its intensity.

The psychologist also highlighted “cognitive reappraisal,” which involves reframing a situation to find benefits or understand another person’s perspective. Additionally, she discussed how entering “flow states” through immersive activities such as gaming or cooking helped people cope with the uncertainty of the COVID-19 pandemic.

Toward the end of the interview, Schnitker explored how a “higher-order purpose” influences the ability to endure difficulty. She detailed a study on adolescents observing Ramadan, which found that fasting for spiritual reasons led to sustained increases in patience. Similar results appeared in research on marathon runners, where training for charity proved more effective for character growth than training for fitness.

The episode concluded with a look at the physical and mental costs of chronic impatience. Schnitker noted that an inability to wait is linked to higher risks of cardiovascular problems and anxiety. She added that impatience is also associated with loneliness and depressive symptoms, likely due to the strain it places on relationships.


Persistent depression linked to resistance in processing positive information about treatment

21 February 2026 at 05:00

A study comparing individuals with persistent depressive disorder to those with episodic major depressive disorder found that those with persistent depression had lower treatment expectations. These individuals also changed their expectations about treatment outcomes less in response to positive reports by other patients. The research was published in Psychological Medicine.

Depression, or major depressive disorder, is a mood disorder characterized by persistent sadness or loss of interest lasting at least two weeks and causing significant impairment in daily functioning. It involves cognitive, emotional, and physical symptoms such as hopelessness, fatigue, sleep disturbances, appetite changes, and difficulty concentrating. Treatment typically involves psychotherapy, pharmacotherapy such as antidepressants, or a combination of both.

However, for many individuals who develop depression, treatment does not result in a remission of symptoms. Furthermore, a substantial share of individuals whose depressive symptoms do go into remission after treatment soon experience a new depressive episode. Overall, in around 20-30% of people who suffer from major depressive disorder, depressive symptoms become chronic, meaning that they persist for at least 2 years. Their condition may then be classified as “persistent depressive disorder” (PDD), as opposed to episodic depression characterized by a depressive episode followed by a remission of symptoms.

Study author Tobias Kube and his colleagues wanted to test two hypotheses. The first was that people with persistent depression adjust their expectations of psychotherapeutic treatment less than people with episodic depression in response to positive information. The second hypothesis was that people with persistent depression alter expectations of future life events less than people with episodic depression.

This study was part of a larger project that examined the role of cognitive immunization in expectation change in depression. Cognitive immunization is a psychological process in which individuals reinterpret or dismiss disconfirming evidence in order to preserve an existing belief or self-schema, thereby preventing belief change despite contradictory information.

Study participants were 156 individuals with major depressive disorder or persistent depressive disorder, recruited from a German university outpatient clinic and several private psychotherapy practices. They received 15 EUR for their participation. Of these individuals, 65.4% met the criteria for episodic major depression, while the remaining 34.6% met the criteria for persistent depressive disorder. Participants’ average age was approximately 35 years, and 67% of them were women.

At the beginning of the study procedure, participants reported their expectations of psychotherapeutic treatments (using the Milwaukee Psychotherapy Expectation Questionnaire) as well as their expectations of future life events (the Future Event Questionnaire). Next, they watched videos of four patients (played by amateur actors). These characters first reported on their symptoms, but then reported on how psychotherapy helped them overcome their problems.

Participants were randomly assigned to one of four experimental conditions that differed in the instructions they received for watching the videos. One group was told to focus on the similarities between themselves and the person in the video. Study authors hypothesized that this would make it difficult for these participants to disregard positive information about psychotherapy voiced by persons in the videos.

Another group was told to focus on the differences between themselves and the people in the video, with the aim of making it easier for participants to engage in cognitive immunization and disregard the experiences of the person in the video. The third group was told to focus on the physical appearance of the people in the videos (a control condition). Finally, the fourth group did not receive any instructions before watching the videos.

After watching the videos, participants completed assessments of expectations of psychotherapeutic treatments and future life events again. They also completed an assessment of cognitive immunization (the Cognitive Immunization Against Other People’s Experiences scale), a 7-item assessment examining whether they interpreted the videos negatively, and a brief assessment of recall of video contents. Participants’ depressive symptoms were assessed using the Beck Depression Inventory-II.

Results showed that participants with persistent depression had, at the start, less positive expectations about the outcome of psychotherapy than people with episodic depression. After watching the videos, participants with persistent depression adjusted their expectations of psychotherapy much less than people with episodic depression. This difference was particularly pronounced in the group that was given instructions meant to promote cognitive immunization (focusing on differences).

However, contrary to the second hypothesis, the two groups did not differ in how they changed their expectations regarding future life events. Additionally, participants with episodic and persistent depression did not differ in their average level of cognitive immunization scores.

“The results indicate that people with persistent depression have difficulty adjusting their treatment expectations in response to positive information on psychotherapy. This may be a risk factor for poor treatment outcome. The results regarding cognitive immunization suggest that for people with persistent depression, slight doubts about the value of information on the positive effects of psychotherapy may be sufficient to prevent them from integrating this information,” study authors concluded.

The study contributes to the scientific understanding of the cognitive functioning of individuals with depression. However, it did not include a healthy control group, so it remains unknown to what extent the observed effects are specific to depression rather than patterns that would also appear in healthy individuals.

The paper, “Differences between persistent and episodic depression in processing novel positive information,” was authored by Tobias Kube, Edith Rapo, Mimi Houben, Thomas Gärtner, Eva-Lotta Brakemeier, Julia Anna Glombiewski, and Winfried Rief.

MCT oil may boost brain power in young adults, study suggests

21 February 2026 at 03:00

A new study published in Physiology & Behavior has found that medium‑chain triglyceride oil can sharpen certain aspects of thinking in young adults, both immediately after a single dose and after a month of daily use.

Medium‑chain triglyceride oil has long been studied for its potential to support brain health in older adults and people with neurological conditions. However, its cognitive-enhancing potential in healthy young people has remained an open question.

The brain relies heavily on energy, and medium‑chain triglycerides are known for their ability to quickly increase ketone bodies, an alternative fuel source that the brain can use when glucose is low. This metabolic advantage has made medium‑chain triglycerides a popular topic in nutrition and neuroscience research.

Led by I Wayan Yuuki from Ritsumeikan University in Japan, the researchers sought to discover whether the benefits of medium‑chain triglyceride oil extend to young adults who do not have cognitive impairments.

To investigate, Yuuki and colleagues conducted a randomized controlled trial involving 36 healthy young adults (20 males, 16 females) with an average age of 21 years. Participants were assigned to consume either 12 grams of medium‑chain triglyceride oil or olive oil, which served as the long‑chain triglyceride comparison. The study included two phases: an acute test and a 4‑week daily supplementation period.

In the acute phase, participants completed a series of cognitive tests, consumed their assigned oil mixed with oatmeal, and repeated the tests 75 minutes later.

The researchers found that medium‑chain triglyceride oil did not improve short‑term memory or working memory in the immediate timeframe. However, compared with the long-chain triglyceride oil, it did significantly enhance inhibitory control, the mental process that helps people resist distractions and suppress automatic responses. This improvement was measured using the reverse-Stroop task, a classic test of cognitive control in which individuals must name the word rather than the color the word is printed in.

“The mechanisms underlying the acute effect of medium‑chain triglyceride on the inhibitory control process remain unknown,” Yuuki and colleagues noted. They hypothesized that “increased ketone body metabolism [in the brain] via increased circulating levels of ketone bodies” may play a role.

The long‑term phase told a different story. After four weeks of daily medium‑chain triglyceride intake, participants showed no improvement in memory or inhibitory control compared to the olive‑oil group.

However, the participants did perform better on a demanding working‑memory task than the long-chain triglyceride group, responding more quickly and consistently during the 2‑back test. In the 2-back test, participants watch a series of images appear one by one and press a button whenever the current image matches the one shown two steps earlier. This suggests that regular medium‑chain triglyceride consumption may strengthen the brain’s ability to hold and manipulate information, even if it does not produce immediate changes in this area.
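The matching rule of the n-back task is simple enough to sketch as a small function; the stimulus sequence below is invented for illustration:

```python
def n_back_targets(sequence, n=2):
    """Return the positions where the current stimulus matches the one
    shown n steps earlier, i.e., where a participant should respond."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

# Example stimulus stream (hypothetical): respond at positions 2, 4, and 5,
# where the current letter repeats the letter shown two steps back.
stimuli = ["A", "B", "A", "C", "A", "C", "C"]
hits = n_back_targets(stimuli, n=2)  # → [2, 4, 5]
```

Raising `n` increases the memory load, which is why the 2-back counts as a demanding working-memory task while a 1-back is comparatively easy.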

Yuuki’s team concluded, “to the best of our knowledge, this study is the first to demonstrate that, compared to long-chain triglyceride intervention with the same macronutrients, a 4-week daily medium‑chain triglyceride regimen is an effective strategy for improving information processing speed and performance stability in complex working memory, though not in easy working memory, among young adults.”

However, the researchers note that the study has limitations. For instance, participants were told to maintain their usual lifestyle habits, including usual diet, physical activity levels, and sleep quality, during the testing period—but these factors were not measured.

The study, “Both a single dose and a 4-week daily regimen of medium-chain triglycerides boost certain aspects of cognitive function in young adults: A randomized controlled trial,” was authored by I Wayan Yuuki, Kento Dora, Teppei Matsumura, Kazushi Fukuzawa, Yoshino Murakami, Kaito Hashimoto, Hayato Tsukamoto, and Takeshi Hashimoto.

AI art fails to trigger the same empathy as human works

21 February 2026 at 01:00

For centuries, philosophers and psychologists have argued that art does more than please the eye. It serves as a bridge between minds, allowing viewers to step into the experiences of others and develop a shared sense of humanity. A new series of experiments suggests that this bridge may be broken when the artist is a machine.

Researchers found that when people believe a work of art was created by artificial intelligence, they feel less awe. This reduced emotional response leads to a decrease in empathy for the subjects depicted in the work. The findings were published in the Journal of Experimental Social Psychology.

The study explores a psychological chain reaction that begins with the creator’s identity. Art is traditionally viewed as a deeply human act of expression. When we engage with a painting or a poem, we are not just processing visual or linguistic information. We are often attempting to understand the intent and perspective of another person. This process can trigger a sense of awe. Awe is an emotion we feel when we encounter something vast that challenges our current understanding of the world. Psychological theory suggests that awe diminishes our focus on the self and encourages us to feel connected to others.

Artificial intelligence has rapidly entered the creative sphere. Algorithms can now generate paintings, poetry, and music that mimic human styles with high fidelity. Michael W. White, a researcher at Columbia Business School, and his colleague Rebecca Ponce de Leon sought to understand if these AI-generated works function the same way human art does. They wanted to know if the knowledge of an artwork’s origin changes the emotional payoff for the viewer. They hypothesized that without a human mind behind the curtain, the sense of awe would evaporate. Without awe, the subsequent feelings of empathy might fail to materialize.

To test this, the researchers conducted five separate experiments involving over 1,500 participants. The first study took place in the real world rather than a laboratory. Research assistants recruited patrons at two major art museums in a large Northeastern city. These patrons viewed paintings depicting human suffering, such as miners, garment workers, or survivors of natural disasters.

The researchers used a deceptive experimental design to isolate the effect of the label. All the images shown were actually generated by AI. However, half the participants were told the art was created by a human artist named Jamie Kendricks. The other half were told the art was created by an artificial intelligence program. Participants then rated their empathy for the suffering people depicted in the images. The results showed a clear divide. Patrons who believed they were looking at AI art reported lower levels of empathy than those who thought they were viewing human art.

The second study aimed to ensure that the quality of the art was not the deciding factor. This time, the researchers used paintings actually created by human artists. They again manipulated the labels. Some participants were told the human-made art was the work of AI. The pattern held firm. Even when looking at human-created work, the mere belief that it came from a machine reduced the empathy participants felt for the subjects. This confirmed that the bias stems from the viewer’s beliefs about the creator, not the aesthetic properties of the image itself.

In the third study, the team expanded their scope to literary art. Participants read poems about love, nature, or family. The researchers also introduced a specific measure for awe. They asked participants how much wonder or amazement they felt. The data revealed that people experienced less awe when they attributed the poetry to a computer program. Statistical analysis showed that this lack of awe was responsible for the drop in empathy.

The fourth study moved back to a field setting to see if these feelings influenced behavior. The researchers set up a station in the lobby of a large office building. Passersby viewed a painting of disaster survivors. Afterward, they were given the opportunity to donate part of their compensation to charity. Participants who believed the painting was AI-generated reported less awe and empathy. Consequently, they were less likely to donate any money compared to those who believed a human painted the image.

The final study dug deeper into why AI art fails to elicit awe. The researchers measured two specific components of awe: perceived vastness and the need for accommodation. Vastness refers to the sense that something is larger than the self or ordinary experience. Need for accommodation is the feeling that a new experience challenges one’s existing mental structures. Participants viewed a painting of tsunami survivors. Those who thought it was AI-generated rated the work as less vast. They also felt less need to mentally accommodate the work. This lack of cognitive challenge stifled the experience of awe, which in turn suppressed empathy.

These findings align with a growing body of evidence regarding human reactions to AI creativity. A separate meta-analysis authored by Alwin de Rooij and published in Psychology of Aesthetics, Creativity, and the Arts examined nearly 200 effect sizes from various studies. De Rooij found that knowing an image is AI-generated negatively impacts how people process the work. This bias affects deep interpretation and even changes how viewers perceive basic visual features like color and brightness.

Similarly, a study authored by Kobe Millet and colleagues in Computers in Human Behavior found that people perceive AI art as less creative. Millet’s team identified “anthropocentric creativity beliefs” as a driving factor. This is the conviction that creativity is a uniquely human trait. People who hold this belief strongly are more likely to downgrade their appreciation of AI art. They experience less awe when viewing it. White and Ponce de Leon’s work builds on this by showing that the deficit in awe has social consequences. It stops the art from functioning as a tool for moral and emotional connection.

There are limitations to the current research. The studies primarily used art depicting suffering or serious subjects to measure empathy. It is unclear if the same blunting effect would apply to art meant to evoke joy or whimsy. Additionally, attitudes toward AI are shifting rapidly. As younger generations grow up with generative tools, they may not harbor the same biases against machine creation. Their capacity for awe in the face of algorithmic output might differ from the current norm.

Future research could investigate whether different types of art, such as music or film, suffer the same penalty. It could also examine if collaborative works, labeled as human-AI partnerships, manage to preserve the emotional impact. For now, the data suggests a hidden cost to the automation of creativity. We may gain efficiency in generating images, but we risk losing the profound connection that comes from witnessing another human’s expression.

The study, “Less “awe”-some art: How AI diminishes the empathic power of the arts,” was authored by Michael W. White and Rebecca Ponce de Leon.

New research highlights the enduring distinctiveness of marriage

20 February 2026 at 23:00

New research suggests that when given the option between marriage and domestic partnership, same-sex couples in the United States overwhelmingly choose marriage. The findings indicate that marriage retains a distinct and powerful status due to its legal benefits, social clarity, and perceived level of commitment. This study was published in the Journal of Marriage and Family.

Social scientists have debated the status of marriage in American society for decades. One prominent theory, known as deinstitutionalization, suggests that the social norms and rules surrounding marriage are weakening. This theory posits that marriage is becoming less distinct from cohabitation, or living together without being married. For same-sex couples, this question has been particularly complex. Historically excluded from marriage, many couples relied on alternatives like domestic partnerships to secure legal recognition.

Domestic partnerships are legal relationships available in some jurisdictions that grant couples some of the rights and responsibilities of marriage. Before marriage equality was established federally, debates occurred within the gay rights movement regarding the value of marriage.

Some activists argued for assimilation into the tradition of marriage. Others advocated for domestic partnerships as a way to reject what they viewed as a patriarchal or overly traditional institution. The researchers aimed to understand if same-sex couples viewed these two forms of union as equivalent or if they preferred one over the other when both were legally available.

“The percentage of American adults who are married has been steadily declining in recent decades. One question we need to ask ourselves is: is the institution of marriage in decline or perhaps even dying?” explained study author Michael J. Rosenfeld, a professor of sociology at Stanford University.

“One way to think about the question is to ask: what are the alternatives to marriage? Domestic partnership laws in some states, California as an example, were designed to offer the same rights and benefits as marriage. So when given the choice between marriage and domestic partnership, what did couples choose and why?”

For their study, the researchers utilized two primary sources of data. The first was an administrative dataset from the California Secretary of State. This included records of all domestic partnerships filed in California from January 2000 to November 2020. The researchers identified same-sex couples within this data by analyzing the first names of the partners. They used a database from the Social Security Administration to determine the probability of a name being male or female.

Couples were categorized as same-sex if both names had a greater than 95 percent probability of belonging to the same gender. This process identified 48,310 same-sex domestic partnerships. The researchers then compared these registrations with data on same-sex marriages from the American Community Survey. This allowed them to track how the uptake of domestic partnerships changed after same-sex marriage became legal in California in late June 2013.
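The name-based classification rule can be sketched in Python. The probability table below is hypothetical and merely stands in for the Social Security Administration name data the researchers used:

```python
# Hypothetical lookup: probability that a first name belongs to a female,
# standing in for the Social Security Administration name database.
P_FEMALE = {"maria": 0.99, "linda": 0.98, "james": 0.01, "robert": 0.02, "jordan": 0.55}

def couple_category(name_a, name_b, threshold=0.95):
    """Classify a couple as 'same-sex', 'different-sex', or 'ambiguous',
    mirroring the >95 percent probability rule described in the study."""
    def gender(name):
        p = P_FEMALE.get(name.lower())
        if p is None:
            return None            # name not in the database
        if p > threshold:
            return "F"
        if p < 1 - threshold:
            return "M"
        return None                # name is not confidently gendered

    g1, g2 = gender(name_a), gender(name_b)
    if g1 is None or g2 is None:
        return "ambiguous"
    return "same-sex" if g1 == g2 else "different-sex"
```

For example, `couple_category("Maria", "Linda")` returns `"same-sex"`, while a gender-ambiguous name like "Jordan" (55 percent female in this toy table) yields `"ambiguous"` and would be excluded from the count.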

The second data source was the 2022 “How Couples Meet and Stay Together” survey. This is a nationally representative survey of adults in the United States. The sample included 92 individuals currently in same-sex relationships. These participants were asked explicitly whether they would prefer to be married or in a domestic partnership. They were also asked to write open-ended responses explaining the reasons for their preference.

The analysis of the California administrative data showed a dramatic shift in behavior following the legalization of same-sex marriage. In the years prior to 2013, thousands of same-sex couples registered as domestic partners. However, immediately after the Supreme Court decision that allowed same-sex marriage in California, new domestic partnership registrations dropped significantly.

In the second half of 2013, same-sex couples in California chose marriage over domestic partnership at a ratio of more than 22 to 1. This preference persisted over time. Even years later, between 2016 and 2018, new same-sex marriages outnumbered new domestic partnerships by a ratio of about 13 to 1. This suggests that for the vast majority of couples, domestic partnership was a temporary substitute rather than a preferred alternative.

The national survey results supported the findings from the California administrative data. Among the respondents in same-sex relationships, the preference for marriage was dominant. About three times as many respondents preferred marriage compared to those who preferred domestic partnership.

When asked to explain their reasoning, participants provided clear distinctions between the two institutions. The most common reason for preferring marriage was practical and legal. Respondents noted that marriage offers federal benefits and tax advantages that domestic partnerships do not. They also highlighted portability, which refers to the ability of their legal status to be recognized in other states or countries. Domestic partnerships often lack this recognition outside the jurisdiction where they are performed.

Beyond legal rights, the social and symbolic nature of marriage played a major role. Many participants described marriage as signifying a higher level of commitment than domestic partnership. They viewed domestic partnership as a “marriage-lite” option or a status that implied a less serious bond. Respondents also noted that marriage is a term that is immediately understood by families, friends, and coworkers. This social intelligibility allows couples to communicate the nature of their relationship without needing to explain complex legal terms.

A minority of respondents did prefer domestic partnership. Their reasons often aligned with the theories of those who critique traditional marriage. Some viewed marriage as having too much historical baggage or religious connotation. Others preferred domestic partnership specifically because it felt like a lower level of commitment. This aligns with the idea of a “menu of options,” where couples can choose the legal status that best fits the intensity of their relationship.

The researchers concluded that marriage remains a highly resilient institution. Rather than fading in importance, the distinctiveness of marriage appears to have been reinforced by the fight for marriage equality. Same-sex couples, having studied the institution from the outside for years, appear acutely aware of the specific advantages marriage provides.

“My findings show that same-sex couples overwhelmingly chose marriage over domestic partnership, even though many in the gay rights movement predicted that same-sex couples might prefer the newer and less traditional option of domestic partnership,” Rosenfeld told PsyPost. “Marriage is a durable and flexible institution that is thousands of years old and is not going away.”

As with any study, there are some limitations. California was unique in offering a domestic partnership system that granted nearly all state-level rights of marriage. Most other states did not offer such a robust alternative, making direct comparisons difficult in other regions. Additionally, the number of same-sex couples in the 2022 survey was relatively small, which limits the ability to generalize the survey findings to the entire population with high precision.

Future research could examine how these preferences shift for younger generations who have grown up in a world where marriage equality is the norm. It remains to be seen if the specific legal and cultural distinctions between marriage and other forms of union will continue to hold the same weight as the political context evolves.

The study, “What Happened to the Marriage Alternatives? Same-Sex Couples in the United States and the Distinctiveness of Marriage,” was authored by Michael J. Rosenfeld and Alisa Feldman.


Genetic analysis reveals shared biology between testosterone and depression

20 February 2026 at 21:00

Recent research has identified a substantial genetic overlap between the risk of developing major depressive disorder and the biological regulation of testosterone levels. The analysis suggests that the hereditary factors influencing total testosterone and a specific protein that transports sex hormones share a negative correlation with the genetic risk for depression. These findings were published in the journal BMC Psychiatry.

Depression is a pervasive mental health condition marked by persistent sadness and a loss of interest in daily activities. While environmental and psychological stressors play a role in its development, biological factors are also primary drivers. Researchers have observed that depression occurs roughly twice as often in women as in men. This disparity has led scientists to suspect that sex hormones may influence the disorder. Testosterone is one of the primary sex hormones in humans. It affects various aspects of physical and mental health.

Previous observational studies have attempted to link testosterone levels to depression, but the results have been inconsistent. Some data suggest that low testosterone in men correlates with depressive symptoms. Other studies indicate that high testosterone in premenopausal women is associated with depression. This contradiction makes it difficult to determine if the hormone causes the mood disorder or if the two simply co-occur due to other factors.

To address this uncertainty, researchers are increasingly looking at the genetic blueprints that dictate both hormone levels and depression risk. By examining DNA, scientists can bypass the fluctuations of daily hormone levels to see if the underlying biological architecture is shared. Wen Lu, a researcher at The First Affiliated Hospital of Xi’an Jiaotong University in China, served as the first author on a study investigating this genetic connection. Jian Yang, a researcher at the same institution, served as the corresponding author.

The team focused on three specific traits related to testosterone. The first trait was total testosterone, which refers to the aggregate amount of the hormone in the blood. The second trait was sex hormone-binding globulin, or SHBG. This is a protein that latches onto testosterone and transports it throughout the body. When testosterone is bound to SHBG, the body cannot immediately use it. The third trait was bioavailable testosterone. This represents the fraction of the hormone that is either free-floating or loosely bound, making it easily accessible for the body’s tissues to use.

The researchers utilized data from genome-wide association studies to conduct their analysis. A genome-wide association study involves scanning the genomes of many people to find genetic variations associated with a particular disease or trait. For the depression data, Lu and colleagues used a massive dataset from the Psychiatric Genomics Consortium. This dataset included genetic information from hundreds of thousands of individuals of European ancestry. For the testosterone and SHBG data, they accessed the UK Biobank, a similarly large biomedical database.

The team employed a statistical method known as linkage disequilibrium score regression to estimate genetic correlations. This technique allows researchers to determine if the genetic variants associated with one trait correlate with the variants associated with another. They also used a method called MiXeR. This tool helps estimate the total number of genetic variants shared between two traits, regardless of whether the correlation is positive or negative.

The analysis revealed a negative genetic correlation between major depressive disorder and total testosterone. This means that the genetic variants associated with higher levels of total testosterone tend to be associated with a lower risk of depression. A similar negative correlation appeared between depression and SHBG. However, the researchers found a negligible genetic correlation between depression and bioavailable testosterone. This lack of connection for the bioavailable form was unexpected given the other results.

Beyond simple correlations, the study uncovered an extensive polygenic overlap. The term polygenic refers to a trait that is influenced by many different genes rather than just one. The researchers estimated that approximately 49 percent of the genetic variants that influence total testosterone also influence the risk of major depressive disorder. For SHBG, roughly 32 percent of the variants overlapped with depression risk. This suggests that the biological pathways regulating these hormones are deeply intertwined with the pathways involved in mood regulation.

To identify the specific locations on the genome responsible for this overlap, the team used a statistical framework called the conjunctional false discovery rate. This method identified a range of 28 to 79 genomic loci shared between depression and the testosterone traits. A genomic locus is a specific fixed position on a chromosome where a particular gene or genetic marker is located.

One specific locus stood out in the analysis. A gene known as NT5C2 was simultaneously associated with total testosterone, SHBG, and major depressive disorder. NT5C2 encodes an enzyme that helps maintain the balance of nucleotides within cells. Nucleotides are the basic building blocks of DNA and RNA. Previous research has linked this gene to other psychiatric conditions, such as schizophrenia. Its presence here suggests it may play a broad role in brain function and mental health.

The researchers also performed a functional annotation to understand what these shared genes actually do in the body. They looked at the biological pathways where these genes are most active. A biological pathway is a series of actions among molecules in a cell that leads to a certain product or change. The analysis showed that the genes shared by depression and testosterone traits were predominantly enriched in immune-related pathways.

This connection to the immune system aligns with existing theories about depression. Scientists have long noted that people with depression often exhibit signs of inflammation and immune system activation. Glucocorticoids are steroid hormones that regulate immune responses. They are released by the hypothalamic-pituitary-adrenal axis, or HPA axis. The HPA axis is the body’s primary stress response system.

The study authors propose that the HPA axis acts as a bridge between testosterone regulation and depression. Long-term stress can dysregulate the HPA axis. This dysregulation leads to abnormal release of glucocorticoids. Testosterone acts as a negative feedback inhibitor for this system. This means testosterone helps tell the HPA axis to calm down. If the genetic factors regulating testosterone are faulty, the HPA axis may remain overactive. An overactive HPA axis is a known contributor to the development of depressive symptoms.

There are limitations to this study that require consideration. The genetic data used in the analysis came primarily from populations of European ancestry. Genetic associations found in one ancestral group do not always translate perfectly to others. The findings may not fully apply to populations in Asia, Africa, or other regions.

Another limitation involves the complexity of age and sex differences. The relationship between testosterone and mood can change as people age. It also differs fundamentally between males and females. The current genetic analysis pooled data in a way that makes it difficult to parse these specific demographic nuances.

The study also focused on genetic predisposition rather than real-time hormone levels. While genetics provide a blueprint, environmental factors heavily influence actual hormone levels and mental health status. Knowing that a genetic correlation exists does not predict with certainty who will develop depression based on their hormonal genetics.

Future research will need to explore the biological mechanisms of the identified genes. The discovery of the NT5C2 gene’s involvement provides a concrete target for laboratory experiments. Scientists must determine exactly how this gene influences both hormone transport and mood regulation in brain cells.

The findings also open new avenues for understanding why some patients do not respond to standard antidepressants. Current treatments primarily target neurotransmitters like serotonin. If a subset of depression cases is driven more by hormonal and immune dysregulation, different treatment strategies might be necessary.

This research reinforces the idea that mental health disorders are systemic issues involving the whole body. The separation between “brain” disorders and “hormonal” disorders is becoming increasingly blurred. By mapping the shared genetic architecture, scientists are slowly assembling a more complete picture of human physiology.

The study, “Exploring the shared genetic architecture between testosterone traits and major depressive disorder,” was authored by Wen Lu, Xiaoyan He, Huan Peng, Pu Lei, Jing Liu, Yuanyuan Ding, Bin Yan, Xiancang Ma, and Jian Yang.

Artificial sweeteners spark more intense brain activity than real sugar

20 February 2026 at 19:00

Your brain may be able to tell the difference between a diet soda and a regular sugary drink, even if they taste exactly the same to you. New research suggests that artificial sweeteners trigger distinct and more intense electrical activity in the brain compared to natural sugar, even when the sweetness levels are identical. These findings were published recently in the journal Foods.

The human desire for sweet foods is innate and powerful. This evolutionary drive has led to a modern health crisis characterized by excessive sugar consumption. In response, the food industry has developed numerous sugar substitutes. These additives promise the sensory pleasure of sweetness without the caloric cost.

While these products are popular, scientists are still working to understand how the human body and brain react to them. Most research focuses on how these sweeteners affect metabolism or appetite hormones. Less is known about how the brain processes the actual sensation of tasting them.

Sensory perception is usually measured in two ways. The first is explicit measurement, which involves asking a person to describe what they are tasting. This method relies on the participant’s ability to articulate their experience. It can be unreliable because people have different vocabularies and subjective baselines for sweetness. The second method is implicit measurement. This approach looks at physiological data that the participant cannot control. It offers a window into the body’s automatic reactions.

Xiaolei Wang and a team of researchers from Zhejiang University in China chose to use implicit measurement for this investigation. They utilized electroencephalography, commonly known as EEG. An EEG is a non-invasive test that records electrical patterns in the brain. It involves placing a cap with small metal discs called electrodes on a person’s scalp. These electrodes detect the tiny electrical charges that result from the activity of brain cells. This technology allows scientists to observe brain activity with millisecond-level precision.

The researchers recruited 30 healthy university students for the experiment. All participants were right-handed and between the ages of 18 and 30. They had no history of smoking or alcohol consumption that might dull their sense of taste. Two participants were later excluded from the data because of excessive movement or eye blinking, which creates noise in the EEG signal. This left a final group of 28 participants.

The study aimed to answer two specific questions regarding sweetness. First, the team wanted to see how the brain reacts to different amounts of the same sweetener. Second, they wanted to see if the brain reacts differently to chemically distinct sweeteners that have been balanced to taste equally sweet. This condition is known as being “iso-sweet.”

To test the first question, the researchers prepared solutions of sucrose, which is common table sugar. They created four different concentrations: 1%, 3%, 5%, and 7%. Sucrose served as the baseline for natural sweetness.

To test the second question, the researchers selected three popular non-nutritive sweeteners. Non-nutritive sweeteners are substances that provide sweetness but few or no calories. The team used erythritol, sucralose, and stevioside. They carefully adjusted the concentration of these three solutions so that human tasters would perceive them as having the same sweetness intensity as the 7% sucrose solution.

The experiment took place in a quiet, temperature-controlled laboratory. Participants sat wearing the EEG caps and followed a strict “sip and hold” protocol. For each trial, the participant rinsed their mouth with water. They then received a 5 milliliter sample of a sweet solution. They held the liquid in their mouths without swallowing for five seconds. After this period, they spat the sample out and rinsed again. There was a 60-second rest period between each taste test to allow the brain signals to return to a neutral baseline.

The results regarding the concentration of sugar were unexpected. One might assume that a stronger concentration of sugar would produce a stronger electrical signal in the brain. The data showed the opposite effect. The 1% sucrose solution elicited a stronger EEG signal than the 5% or 7% solutions.

The researchers propose that this decrease in signal strength may be due to neural adaptation. When a stimulus becomes too strong, the brain sometimes dampens its response to avoid being overwhelmed. This is a phenomenon often seen in sensory processing, where the system becomes saturated. The brain essentially turns down the volume on the incoming “loud” taste signal.

The results regarding the different types of sweeteners were equally revealing. All three non-nutritive sweeteners produced stronger brain responses than the 7% sucrose solution they were designed to mimic. Even though a person might say the stevioside solution tasted just as sweet as the sugar solution, their brain activity told a different story.

Stevioside elicited the most robust neural response of all the substances tested. Erythritol caused the second strongest reaction. Sucralose also triggered a response that was statistically distinct from sugar. This indicates that the brain can differentiate between the chemical nature of sweeteners. It perceives them as different stimuli even if the conscious mind perceives the same level of sweetness.

The researchers also analyzed specific types of brain waves. They looked at alpha waves, which are typically associated with wakeful relaxation. They also analyzed delta waves, slower oscillations typically associated with deep sleep. The non-nutritive sweeteners caused a surge in power in both these frequency bands. This suggests that artificial sweeteners might engage more neural resources than natural sugar.
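The kind of band-power comparison described here can be illustrated with a standard spectral estimate. This is a generic sketch, not the authors' actual EEG pipeline: the sampling rate, band edges, and synthetic signal below are assumptions for demonstration.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Sum the Welch power spectral density between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

fs = 250                              # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)          # 10 seconds of signal
rng = np.random.default_rng(0)
# Synthetic "EEG": a 10 Hz alpha-band oscillation plus broadband noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)    # alpha band (8-12 Hz)
delta = band_power(eeg, fs, 1, 4)     # delta band (1-4 Hz)
print(alpha > delta)                  # the 10 Hz component dominates
```

In a real experiment this calculation would be repeated per electrode and per tasting trial, and the band powers compared across sweetener conditions.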

The study also mapped where this activity was happening in the brain. The most active areas were the frontal and parietal-occipital regions. The frontal region is often involved in emotional regulation and decision-making. The parietal-occipital region, located toward the back of the head, is heavily involved in processing sensory information.

The timing of the brain’s reaction also varied by sweetener. The response to stevioside began early and remained strong throughout the tasting period. In contrast, the responses to erythritol and sucralose peaked and then faded relatively quickly. The response to natural sugar was slower to start and weaker overall.

These findings suggest that artificial sweeteners stimulate the brain in a way that is fundamentally different from sugar. The increased electrical activity might reflect the brain trying to process a chemical structure that does not perfectly match the biological expectation of “sweet energy.” The mismatch between the sweet taste and the lack of calories is a known area of interest in nutrition science.

There are limitations to this study that affect how the results should be interpreted. The sample size was relatively small. The participants were all young university students, so the results may not apply to older adults or children. Additionally, the participants did not swallow the solutions. Swallowing engages additional sensory receptors in the throat and digestive system that contribute to the overall experience of eating.

The researchers note that this technology could have practical applications. Food scientists could use EEG to objectively measure how consumers respond to new products. This would reduce the reliance on subjective taste tests. Understanding the neural “fingerprint” of different sweeteners could help companies design low-sugar foods that mimic the brain response of real sugar more closely.

Future research will likely explore these differences further. Scientists may look at how these brain responses correlate with feelings of satisfaction or cravings. They might also investigate if the brain learns to process these sweeteners differently over time with regular consumption. For now, the study provides evidence that to the human brain, sugar is not just a taste. It is a specific chemical signal that substitutes have yet to perfectly replicate.

The study, “EEG-Based Analysis of Neural Responses to Sweeteners: Effects of Type and Concentration,” was authored by Xiaolei Wang, Guangnan Wang, and Donghong Liu.

Parental math anxiety linked to lower quantitative skills in young children

20 February 2026 at 17:00

A trio of studies published in Psychological Science, Scientific Reports, and the Journal of Experimental Child Psychology provides evidence regarding the development of early mathematical skills in preschool children. The findings suggest that the age at which a child grasps the concept of cardinality is a strong predictor of their first-grade readiness. The research also indicates that the complexity of parental speech and parental math anxiety significantly influence the development of these essential quantitative abilities.

Understanding the foundations of mathematics is a primary goal for developmental psychologists. While many children learn to recite numbers by rote memory, this does not necessarily mean they understand quantity. The conceptual leap occurs when a child understands cardinality. This is the principle that the last number word used when counting a set of objects represents the total quantity of that set.

Scientists sought to determine if the timing of this conceptual insight matters for future academic success. They investigated whether acquiring this knowledge early in preschool provides an advantage over acquiring it just prior to kindergarten. The researchers also aimed to identify specific home and parental factors that facilitate or hinder this learning process.

“This is part of a larger project funded by the National Institutes of Health that is focused on identifying the home (e.g., parent math anxiety), school (e.g., classroom instruction), and child (e.g., working memory) factors that [influence] children’s early conceptual learning of the quantities represented by number words and numerals, as well as overall math achievement,” said study author David C. Geary, a Curators’ Distinguished Professor at the University of Missouri.

The first study, published in 2018, followed 141 children from the beginning of preschool through the first grade. The scientists assessed the children’s quantitative skills at multiple time points. To measure cardinality, the researchers utilized a “give-a-number” task.

In this procedure, a researcher asked the child to provide a specific number of objects, such as “give me three fish.” Children who do not understand cardinality might grab a random handful. Those who grasp the concept count out the exact number requested.

By the first grade, the researchers assessed the children’s number-system knowledge. This involves understanding how numbers relate to one another, such as knowing that the number seven is composed of a six and a one.

The data revealed that the age at which a child became a “cardinal-principle knower” was highly predictive of their later abilities. Children who understood this concept at the beginning of preschool showed significantly higher number-system knowledge in first grade. This advantage existed even after controlling for intelligence and executive function.

Executive function refers to a set of mental skills that include working memory, flexible thinking, and self-control. These cognitive processes allow children to focus attention and manage information.

The results suggest that simply understanding cardinality before kindergarten is not the only factor for success. The timing of this insight appears to be significant. Early mastery allows children to build a deeper understanding of number relationships before formal schooling begins.

Building on this, a second study published in 2025 examined what drives improvements in cardinal knowledge. The scientists focused on the home environment, specifically the nature of conversations between parents and children. They hypothesized that the quality of “number talk” would predict gains in a child’s understanding.

The researchers recruited 86 preschoolers and their primary caregivers. They assessed the children’s cardinal knowledge at the beginning of the school year and again five months later. To measure parental engagement, the scientists used a structured observation task.

Pairs of parents and children were asked to plan a pretend birthday party. They were given specific items like plates and goodie bags to encourage discussion about quantities. The researchers recorded these interactions and transcribed the conversations.

They coded the speech for complexity. Simple number talk involved basic counting or naming small quantities. Complex number talk involved comparing the size of two sets or labeling larger sets of objects.

The analysis indicated that the complexity of parental number talk predicted gains in the children’s cardinal knowledge. Children whose parents engaged in more complex quantitative discussions showed greater improvement over the five-month period. Simple counting activities did not show the same predictive power.

This study also highlighted the role of the child’s own cognitive abilities. Children with stronger executive functions tended to make larger gains in their understanding of number words. This suggests a bidirectional relationship where both the home environment and the child’s cognitive capacities contribute to learning.

The third study, published in 2026, investigated potential barriers to this early development. The researchers explored the impact of parental mathematics anxiety. They sought to understand if a parent’s fear or nervousness regarding math correlated with their child’s quantitative skills at the start of preschool.

This study involved 130 children and their parents. The parents reported their levels of math anxiety using a sliding scale. They also completed assessments of their own math and reading achievement.

The researchers assessed the children’s quantitative competencies using a battery of tasks. These included counting, recognizing numerals, and the give-a-number task. The scientists also measured the children’s executive functions.

The findings provided evidence that higher parental math anxiety is associated with lower complex quantitative knowledge in children. This relationship was particularly evident for cardinal knowledge. Children of highly anxious parents tended to perform worse on these conceptual tasks.

The data revealed an interaction between the child’s cognitive abilities and the parent’s anxiety. Children who possessed strong executive functions and had parents with low math anxiety demonstrated the highest competency levels. This suggests that a child’s ability to focus can amplify the benefits of a low-anxiety home environment.

“Children’s early home experiences with numbers, counting, and related topics contributes to their math development,” Geary told PsyPost. “Parents who avoid these activities place their children at risk of falling behind their peers.”

The study also clarified the nature of parental math anxiety. Parents who reported high anxiety also tended to have lower math achievement scores themselves. They reported lower confidence in their abilities and a tendency to avoid numerical information.

This implies that math anxiety is not an isolated emotional state. It appears to be part of a broader constellation of traits including lower subject proficiency and avoidance behaviors. These factors likely combine to create a home environment that is less conducive to early math learning.

Self-reported home numeracy activities, such as playing number games, did not strongly predict the children’s skills in this specific study. This suggests that general reports of activities may not capture the specific types of interactions that drive learning. The specific quality of engagement, as seen in the party-planning study, seems to be more significant than the frequency of general activities.

The studies — like all research — come with some caveats. The studies identify correlations but cannot definitively prove causation. While the longitudinal designs offer strong evidence, unmeasured genetic or environmental factors could play a role.

“We don’t fully understand the specific activities that promote early math development, but progress is being made,” Geary said.

The reliance on self-reports for some parental measures introduces potential bias. Parents may overestimate the frequency of educational activities.

“There was no relation between parents’ math anxiety and the math activities they reported engaging in, suggesting self-reports of these activities are not reliable,” Geary noted. “They stated they avoided math information and so they likely over-reported how much they engaged in these activities with their children.”

Future research should focus on experimental interventions. Scientists need to determine if coaching parents to use complex number talk can directly improve children’s outcomes. It would also be beneficial to explore if reducing parental anxiety leads to better math readiness in their children.

“We’re still collecting data,” Geary told PsyPost. “In the end, we’ll look at parent, classroom, and child factors that contribute to key aspects of math development over the two years of preschool.”

Despite these limitations, the implications for parents and educators are practical. The research suggests that early exposure to number concepts is beneficial. It indicates that the quality of interaction matters more than rote counting.

Parents might encourage their children by discussing the relationships between numbers. Conversations could involve comparing quantities, such as discussing which pile has more blocks. Labeling the total number of items in a set appears to be particularly helpful.

The findings also suggest that addressing parental attitudes is necessary. Parents who feel anxious about math may inadvertently limit their child’s exposure to complex concepts. Building parental confidence could be a key step in supporting the next generation of learners.

The study, “Early Conceptual Understanding of Cardinality Predicts Superior School-Entry Number-System Knowledge,” was authored by David C. Geary, Kristy vanMarle, Felicia W. Chu, Jeffrey Rouder, Mary K. Hoard, and Lara Nugent.

The study, “Complexity of parental number talk predicts preschoolers’ gains in cardinal knowledge,” was authored by David C. Geary, Emine Simsek, Sara Gable, Jordan A. Booker, Lara Nugent, and Mary K. Hoard.

The study, “Parental mathematics anxiety predicts children’s cardinal number word and numeral knowledge at preschool entry,” was authored by David C. Geary, Sara Gable, Mary K. Hoard, and Lara Nugent.

What is a femcel? The psychology and culture of female involuntary celibates

20 February 2026 at 16:00

In recent years, the term “incel”—short for involuntary celibate—has become a fixture in public discourse, almost exclusively associated with men. The male incel subculture is frequently linked to online misogyny, violent rhetoric, and real-world acts of aggression. However, a parallel but distinct phenomenon has emerged that remains largely obscured from the mainstream view: the “femcel.”

Female involuntary celibates, or femcels, are women who feel they are unable to form romantic or sexual relationships despite wishing to do so. Unlike their male counterparts, whose grievances often turn outward toward women and society, femcels tend to direct their frustrations inward.

New academic research is beginning to explore this understudied population, revealing a complex subculture defined by loneliness, specific standards of beauty, and a digital evolution from support groups to ironic aesthetic movements.

The Origins and Ideology of the Femcel

The concept of involuntary celibacy was actually coined by a woman in the 1990s as an inclusive term for lonely people of all genders. Over time, the male faction radicalized into the modern incel movement, effectively pushing women out of the definition. In response, women formed their own spaces. According to research published in Archives of Sexual Behavior, femcels congregate in online communities to discuss their exclusion from the romantic marketplace.

Hannah Rae Evans and Adam Lankford, scientists at the University of Alabama, analyzed thousands of posts from femcel discussion forums. They found that these women express three distinct types of sexual frustration: unfulfilled desires to have sex, a lack of available partners, and unsatisfying sexual activities. This suggests that for femcels, the issue is not merely a lack of sexual access, but a deep dissatisfaction with the quality and availability of intimate connection.

“When I first heard the term ‘femcel,’ I was immediately interested and wanted to know more about their communities. When I began exploring their online subculture, I saw so many different directions that our research could take because this is such an understudied population,” Evans told PsyPost.

A central pillar of femcel ideology is the “Pink Pill.” This is a gender-flipped version of the “Red Pill” philosophy found in male-dominated online spaces. While the Red Pill claims to reveal the truth about female nature, the Pink Pill focuses on the harsh realities of female existence within a patriarchal society. Specifically, it emphasizes “lookism,” or the belief that society values women almost entirely based on their physical beauty.

Scholars Debora Maria Pizzimenti and Assunta Penna explored this dynamic in their ethnographic study of the Reddit community r/Vindicta. They published their findings in the Italian Sociological Review. Their work describes how femcels view beauty not as subjective, but as an objective, measurable form of power.

In these communities, members often categorize women into a hierarchy. “Stacys” are highly attractive women who hold high sexual market value and receive good treatment from society. “Beckys” are average women. Femcels place themselves at the bottom, believing their physical features prevent them from accessing the privileges afforded to attractive women.

This belief system is rigid. Users often discourage “coping” mechanisms, such as the idea that personality matters more than looks. Instead, they focus on “looksmaxxing,” or the pursuit of surgical and cosmetic enhancements to improve their social standing.

The Psychology of Isolation and Inhibition

While the online rhetoric can be harsh, the underlying psychological profile of a femcel appears to be one of profound isolation. Lola Cassidy, a researcher at the National College of Ireland, conducted a quantitative study comparing women who identify as femcels to a control group of women who do not. Her findings provide statistical evidence regarding the mental health struggles within this community.

Cassidy found that femcels reported significantly higher levels of loneliness compared to non-femcel women. The study utilized the UCLA Three-Item Loneliness Scale, and results indicated that many femcels selected the highest possible scores for feelings of isolation. This supports the qualitative observations that these online spaces serve as a refuge for women who feel entirely disconnected from social life.

In addition to loneliness, the study revealed that femcels exhibit higher levels of social inhibition. Social inhibition involves the avoidance of social situations and the suppression of emotional expression due to a fear of rejection or judgment. Cassidy suggests that this inhibition may predict a stronger preference for online social interactions. For femcels, the internet acts as a necessary buffer, allowing them to communicate without the immediate fear of face-to-face rejection.

The research also highlighted a link between femcel identity and “problematic internet use.” This term refers to compulsive online activity that interferes with daily life. Femcels in the study scored higher on measures of problematic internet use than the control group. They were more likely to use social media for emotional regulation. This implies that for these women, online forums are not just a pastime but a primary coping mechanism for managing negative emotions.

Femcels Versus Incels: A Distinct Difference

A common misconception is that femcels are simply the female equivalent of incels. While they share the core experience of involuntary celibacy and use similar terminology, their reactions to this state differ significantly. Male incels frequently externalize their anger. They often blame women for their celibacy, viewing access to women’s bodies as a right that has been denied to them. This worldview has been linked to real-world violence and mass shootings.

Femcels, in contrast, tend to internalize their frustration. Evans and Lankford noted in their study that femcel discussions contained significantly less support for aggression and violence than what has been reported regarding male incels. While extreme views exist, the researchers are not aware of any mass violence committed by individuals identifying with the femcel community.

Ruby Ling, in a thesis for the University of Alberta, conducted a comparative analysis of incel and femcel subreddits. She found that while both groups use derogatory language to describe the opposite sex, the nature of their grievances is different. Incels often dehumanize women, reducing them to their biological functions. Femcels, conversely, often express a desire for companionship and emotional intimacy rather than just sexual access.

Ling also noted that femcels tend to view their condition as fluid. While incels often believe their genetic fate is sealed at birth, femcels discuss how life events—such as aging, motherhood, or weight gain—can push a woman into “femceldom.” This suggests a view of celibacy that is tied to a woman’s fluctuating social capital rather than an innate biological defect.

Furthermore, Ling’s research highlights the hostility femcels face from male incels. Male incel communities frequently deny the existence of female involuntary celibacy, arguing that women can always find a sexual partner if they lower their standards. This rejection forces femcels to create their own separate spaces, where they often discuss the “misogyny-laden obstacles” they face in dating.

Radical Feminism and the Femcel

The relationship between femcels and feminism is complicated. On the surface, femcel rhetoric often aligns with radical feminist theory. Both groups acknowledge the existence of a patriarchy that oppresses women. Both groups often criticize liberal feminism, particularly regarding the sexual revolution and hookup culture, which femcels argue benefits men while leaving women unfulfilled and used.

Ling’s analysis found that femcel forums often function as women-only spaces where members discuss male violence and the objectification of women. Themes of men feeling entitled to women’s bodies are common in both radical feminist and femcel discourse. However, femcels rarely identify as feminists. They often feel that mainstream feminism ignores the specific struggles of “ugly” or socially awkward women.

Pizzimenti and Penna’s research on the r/Vindicta community supports this. They observed that while the community is a “Pink Pill” space that focuses on female strategies for survival, it is often antifeminist in tone. The focus is individualistic rather than collective. The goal is not to dismantle the patriarchy but to navigate it successfully by maximizing one’s aesthetic value. This reflects a pragmatic, survivalist approach rather than a political movement.

The Rise of “Femcelcore” and Heteronihilism

In recent years, the femcel identity has migrated from obscure forums to mainstream platforms like TikTok, undergoing a significant transformation. Researchers Jacob Johanssen and Jilly Boyce Kay describe this shift in the European Journal of Cultural Studies. They distinguish between the “traditional” femcel—who is genuinely isolated and excluded—and the “aesthetic” femcel, or “femcelcore.”

Femcelcore is characterized by a specific digital aesthetic. It often involves imagery of messy bedrooms, references to “sad girl” culture (such as the novels of Ottessa Moshfegh or the music of Lana Del Rey), and an ironic embrace of “toxic femininity.” This new iteration is less about the inability to find a partner and more about a performance of alienation.

Johanssen and Kay argue that this trend represents a form of “heteronihilism.” This concept describes a deep disappointment with heterosexual culture. It is a mood of fatalistic apathy. Women engaging in femcelcore may not be strictly celibate, but they express a sense of giving up on the promise of romantic fulfillment. They view heterosexuality as inevitably disappointing but inescapable.

This aligns with findings from Ada Jussila of the University of Turku, who analyzed the subreddit r/femcelgrippysockjail. Her work, published in WiderScreen, details how this community uses irony and memes to process mental health struggles and gendered expectations.

Jussila notes that the community is divided. Traditional femcels, who define their status by physical unattractiveness and rejection, sometimes clash with second-wave femcels who view the identity as a mental state or aesthetic. The latter group often engages in “ironic misandry”—exaggerated hatred of men used for comedic effect. This allows them to vent frustration while maintaining a safe distance from their true emotions.

Community Dynamics and Gatekeeping

The tension between these different definitions of “femcel” leads to intense gatekeeping within the community. Pizzimenti and Penna observed that forums like r/Vindicta have strict rules to maintain their focus. They explicitly state that the space is for “unattractive women” and forbid “coping” posts that try to deny the importance of beauty.

Jussila also observed this dynamic. In the communities she studied, users frequently debated who qualifies as a “real” femcel. Traditional members often try to exclude those they perceive as “average” women who are merely going through a rough patch in dating. This “othering” process helps the core group maintain a sense of identity, but it also creates a hostile environment for newcomers.

Despite this, these communities offer a rare source of support. For women who feel completely invisible to society, finding a group that acknowledges their reality is powerful. Ling’s research notes that these forums provide validation for experiences that are otherwise dismissed. Women share advice, support each other through trauma, and offer a space to vent without judgment.

Mental Health and Well-being

The mental health implications of the femcel identity are significant. Cassidy’s study found that femcels reported significantly lower mental well-being compared to the control group. This lower well-being was statistically predicted by their high levels of loneliness and social inhibition.

However, the relationship between internet use and well-being is complex. While femcels exhibit problematic internet use, Cassidy found that this usage did not directly correlate with their loneliness in the same way it did for the control group. This suggests that for femcels, online communities might not be the cause of their loneliness, but rather a symptom or a refuge.

Jussila’s work supports this, noting that the “femcelcore” aesthetic often glamourizes mental illness or dissociation. This can be a double-edged sword. It provides a language for expressing pain, but it may also trap users in a cycle of negativity. Johanssen and Kay warn that the “heteronihilist” mood of these spaces is anti-political. It encourages resignation rather than action, potentially deepening the user’s sense of hopelessness.

Conclusion

The femcel phenomenon is a multifaceted reflection of modern pressures regarding beauty, relationships, and digital connection. It is not simply a female version of the incel movement, though it shares roots in the experience of involuntary celibacy. Research indicates that femcels are driven by internalized distress, loneliness, and a belief that they have failed to meet societal standards of womanhood.

From the rigid beauty hierarchies of r/Vindicta to the ironic despair of TikTok’s femcelcore, these women are navigating a world where they feel they do not belong. While they generally avoid the violent radicalization seen in male incel communities, their struggles with mental health and social isolation are profound.

Scientists Evans and Lankford emphasize that further study of this population is necessary. Understanding femcels can help researchers identify the factors contributing to radicalization and develop support strategies for those suffering from severe social isolation. As the definition of the term continues to evolve, it remains a powerful lens through which to view the changing dynamics of gender and connection in the digital age.

New study sheds light on the psychological burden of having a massive social media audience

20 February 2026 at 15:00

For many aspiring artists and musicians, achieving fame on social media represents the ultimate career goal. A new study published in Administrative Science Quarterly challenges this assumption, revealing that gaining a massive following often triggers a psychological struggle that threatens the creator’s well-being. The research identifies a phenomenon called “audience entanglement,” describing how creators must actively manage their deep emotional connection to their audience to prevent burnout and sustain their careers.

The creator economy has grown rapidly in recent years. It is now a multi-billion dollar industry where individuals can earn a living by sharing their work directly with fans. Academic research and popular advice have historically viewed the attainment of a large audience as the endpoint of a creator’s journey. The prevailing logic suggests that once a creator builds a substantial fanbase, they have succeeded.

The researchers behind the new study argue that this view is incomplete. They suggest that gaining an audience is not an endpoint but rather a new starting point that introduces unique challenges. While traditional gig workers interact with clients or algorithms, digital creators interact with thousands of anonymous strangers. The scientists sought to understand how these independent workers make sense of this relationship. They wanted to know how creators manage the pressure of constant visibility once they have achieved widespread appeal.

“Creative workers often seek a large audience for their creations, in large part because it makes doing creative work financially viable. Probably due to the necessity of a large audience for sustaining this type of work, there is a large body of research that illuminates what makes ideas, products, and services gain widespread appeal,” said study author Julianna Pillemer, an assistant professor of Management and Organizations at New York University’s Stern School of Business.

“We know very little, however, about what happens to creators after this type of large audience is attained. Our study reveals what happens to creators after their work has gained widespread appeal on social media platforms – a phenomenon we call ‘audience entanglement’ – and offers tangible strategies for how they may cultivate a healthier relationship with their audience and capture meaning from their work.”

To explore this dynamic, the scientists conducted an inductive qualitative study. This means they did not start with a hypothesis to prove but instead gathered data to develop a new theory. They focused on two distinct groups of independent creative workers: visual artists on Instagram and musicians on YouTube.

The sample consisted of 54 creators who had already achieved significant success. The visual artists had an average of over 500,000 followers, while the musicians averaged nearly 280,000 subscribers. These numbers placed the participants in the top tier of users on their respective platforms. The researchers conducted a total of 74 in-depth interviews. (This included follow-up interviews with a portion of the participants to track how their experiences evolved over time.)

During these interviews, the participants shared detailed career histories. They described high and low points, their emotional reactions to platform metrics, and their strategies for coping with online interactions. The researchers analyzed these transcripts to identify common themes and psychological states.

The central finding of the study is the deep interrelatedness between the creator and their audience. The researchers found that this relationship becomes a persistent consideration in how the creator approaches their work. It is not something they can easily ignore.

For most creators, this phenomenon initially manifests as “dysfunctional entanglement.” In this state, the creator feels an oppressive dependence on audience reactions. They become hypersensitive to comments, likes, and view counts. They begin to rely on these external metrics as their primary source of validation.

This dysfunctional state also involves a struggle with platform volatility. Social media platforms use complex algorithms to determine which posts get seen. These algorithms change frequently and unpredictably. Creators in a state of dysfunctional entanglement feel they are at the mercy of these hidden rules. They experience distressing emotions when a post fails to perform well. Some participants described this feeling as being in a “chamber of despair” or feeling like a “crumpled up ball of paper.”

When entanglement is dysfunctional, creators often question the meaning of their work. The pressure to please the audience and the fear of losing relevance can make the creative process feel hollow. Consequently, many participants viewed their work on the platform as unsustainable. They expressed desires to quit or find ways to exit the platform economy entirely.

“In most academic research and standard advice, gaining a large audience is seen as the endpoint for creators – a sign that one has ‘made it’ and the real work stops there,” Pillemer told PsyPost. “However, our research reveals that this prevailing view is far from complete – rather, a whole new set of challenges begins that threatens to undermine their creative endeavors altogether.”

“Specifically, we find that creators often experience a sense of ‘dysfunctional’ audience entanglement – a distinctly negative psychological state – even amidst extreme objective success. They use terms like ‘chamber of despair’ and ‘crumpled up ball of paper’ to capture how their audience makes them feel. The psychic pain of this entangled state can make their work seem unsustainable, leading many to want to stop creating altogether. Thus, the very thing that many creators desire most – a large admiring audience for their work – ironically can be the thing that tanks their creative endeavors.”

The study found that some creators manage to shift out of this negative state. They do so by developing specific “entanglement management strategies.” The researchers identified three primary tactics that help creators regain a sense of control.

The first strategy is distancing from audience input. This involves setting strict boundaries around how and when the creator engages with the platform. For example, a creator might choose not to read comments for 24 hours after posting. Others might assign a trusted friend or partner to filter messages, shielding themselves from abusive or unhelpful feedback.

The second strategy is depersonalizing audience critique. This is a cognitive shift where the creator changes how they interpret negative feedback. Instead of viewing a mean comment as a true reflection of their worth, they reframe it. They might view the commenter as someone who is simply having a bad day. They might also decide that the critique is about the specific piece of work, not about them as a human being.

The third strategy is distilling personal standards. This involves a conscious effort to refocus on one’s own artistic ideals. The creator reminds themselves why they started creating in the first place. They prioritize work that meets their own internal standards of quality, rather than creating content solely to chase viral trends.

By utilizing these strategies, creators can move toward a state of “functional entanglement.” This does not mean they disconnect from their audience entirely. Instead, they achieve a balanced dependence. They appreciate their audience and value the connection, but they do not let it dictate their emotional stability.

In a state of functional entanglement, creators are better able to accept platform volatility. They acknowledge that algorithms are unpredictable and that fluctuations in views are a business reality, not a personal failure. This shift allows them to experience uplifting emotions again. They can capture meaning from positive interactions with fans without being crushed by the negative ones. Most importantly, this functional state makes the work feel sustainable in the long run.

The researchers noted that moving between these states is often a cycle. A creator might achieve functional entanglement, only to slip back into dysfunctional patterns when a platform changes its features or when they face a wave of harassment. Maintaining a healthy relationship with the audience requires ongoing effort and the active application of management strategies.

“Because prior research treats having a large audience for one’s creations as an endpoint, we also don’t know much about how creators manage one successfully,” Pillemer explained. “We find that some creators develop strategies to manage their relationship to their audience – distancing themselves from audience input, depersonalizing audience critiques, and distilling their personal standards – that shift them to a state of functional entanglement, or feeling a healthier relationship with their audience that makes their work feel more sustainable in the long run.”

There are some limitations to this study to consider. The research relies on the participants’ own descriptions of their experiences. Self-reported data can sometimes be influenced by how individuals wish to present themselves. Additionally, the study focused specifically on visual artists and musicians. The dynamics might differ for other types of influencers, such as those in the fitness, beauty, or gaming sectors. The platforms studied, Instagram and YouTube, also have specific mechanisms for feedback that might differ from platforms like TikTok or Twitch.

Future research could explore how personality traits influence a creator’s susceptibility to dysfunctional entanglement. It would also be useful to investigate whether these dynamics appear in other professions that are becoming increasingly digitized and dependent on public ratings. As more workers enter the gig economy and rely on digital platforms, understanding the psychological toll of audience dependence will become increasingly important.

“Many influencers seek to go viral above all else, without any concern for how it might impact them psychologically,” Pillemer added. “Indeed, some of the most successful creators – for example, MrBeast – actively encourage approaches that will likely lead to dysfunctional entanglement. Our work challenges the view that simply obtaining more and more followers is uniformly beneficial, and offers a more balanced view on how these workers can see their work as sustainable.”

“The creator economy is booming, with around 30 million creators being paid for their content. Goldman Sachs Research estimates that the value of the creator economy will reach half a trillion dollars by 2027. This research provides concrete strategies for both successful and budding creators to help to manage their relationship to their audience and platform more effectively. By encouraging a more functional relationship with their audience, creators can avoid burnout and continue to find meaning and value in their work.”

The study, “Audience Entanglement: How Independent Creative Workers Experience the Pressures of Widespread Appeal on Digital Platforms,” was authored by Julianna Pillemer, Spencer Harrison, Chad Murphy, and Yejin Park.

Viral AI agent OpenClaw highlights the psychological complexity of human-computer interaction

20 February 2026 at 05:00

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

On Thursday, February 12, the Lex Fridman Podcast featured Peter Steinberger, a software engineer and the creator of the viral AI agent OpenClaw. The episode touched on the psychological evolution of human-computer interaction and how developers are attempting to imbue artificial intelligence with personality to foster deeper connections with users.

The conversation relevant to psychology took place roughly an hour and a half into the recording, beginning with a concept Steinberger calls the “soul” of software. He described his experiment with a file named “soul.md,” a document designed to define the core values and personality traits of his AI agent.

Steinberger explained that while automation is efficient, it often lacks the style and affection that human builders infuse into their work. To counter this dry efficiency, he encouraged his AI to rewrite its own template files to include humor and warmth, creating a user experience that feels “cozy” rather than purely functional.

The most significant moment in this segment occurred when the AI wrote a spontaneous disclaimer about its lack of long-term memory. The agent stated that while it would not remember writing the words in a future session, the words were still its own, a sentiment Steinberger noted gave him “goosebumps” due to its resemblance to genuine consciousness.

Later in the episode, the dialogue shifted toward the emotional toll this technology takes on professional programmers. Steinberger acknowledged that many developers are mourning the loss of the “flow state,” a mental zone of energized focus that they previously achieved through the manual act of writing code.

He compared this transition to the industrial revolution, noting that just as manual laborers felt threatened by the steam engine, coders feel a loss of identity as their core skills are automated. Fridman added that for many, writing code was a source of deep personal meaning that is now being replaced by artificial systems.

Despite this grief, Steinberger argued that professionals must reframe their self-image from “programmers” to “builders” who orchestrate these new tools. He emphasized the importance of empathy for those fearful of displacement while highlighting that this shift allows for the creation of more complex and ambitious projects.

The discussion concluded with real-world examples of how these AI agents are assisting small business owners and individuals with disabilities. Steinberger shared a story of a user whose disabled daughter felt empowered by the technology, illustrating how AI can increase cognitive accessibility despite the societal growing pains.

You can listen to the full interview here.

Moving in boosts happiness for older couples, but marriage adds no extra spark

20 February 2026 at 03:00

Moving in with a romantic partner later in life appears to boost life satisfaction for both men and women, yet formalizing that union through marriage does not provide an additional psychological benefit if the couple is already living together. A new analysis of long-term data suggests that contrary to popular theories regarding gender and emotional reliance, men do not suffer more than women after a relationship breakdown or gain more from entering a new partnership. These findings were published in the International Journal of Behavioral Development.

Social scientists and psychologists have spent decades trying to understand how romantic relationships influence mental health. A prevailing theory suggests that men and women experience these transitions differently due to the way they structure their social lives. Societal norms often encourage women to maintain wide networks of emotionally intimate friendships.

In contrast, men are frequently socialized to rely heavily on their romantic partners for emotional support. This dynamic implies that men should theoretically experience a steeper decline in well-being when a relationship ends, as they are losing their primary source of emotional connection. Conversely, men should theoretically experience a sharper increase in well-being when entering a relationship, as they regain that vital support system.

Iris V. Wahring, a researcher affiliated with Humboldt University Berlin and the University of Vienna, led a team to investigate whether these gendered patterns hold true for middle-aged and older adults. The research team included Urmimala Ghose, Christiane A. Hoppmann, Nilam Ram, and Denis Gerstorf. They sought to determine if age plays a moderating role in how relationships impact happiness.

Theories on aging, such as Socioemotional Selectivity Theory, propose that as people recognize their time is limited, they prioritize a smaller circle of emotionally meaningful relationships. If older men become more comfortable seeking support from family and friends, or if older adults generally become more resilient, the predicted gender gap in relationship transitions might disappear.

To test these ideas, the researchers analyzed data from the Health and Retirement Study. This is a massive, ongoing project funded by the National Institute on Aging that tracks the lives of Americans over the age of 50. The study allows researchers to observe the same individuals over many years, providing a motion picture of their lives rather than a static snapshot. Wahring and her colleagues focused on a sample of 2,840 participants who provided data between 2006 and 2022. They looked specifically at changes in depressive symptoms and life satisfaction following three distinct relationship transitions: separation, moving in with a partner, and getting married.

A major challenge in this type of research is separating the effect of the relationship transition from the personality of the people involved. For example, people who get married might generally be happier or wealthier than those who do not. To solve this, the researchers used a statistical technique called propensity score matching. They created pairs of virtual twins within the data.

If a man in the study moved in with a partner, the researchers compared him to another man in the study who was of the same age, education level, and relationship history but who remained single. By comparing these matched individuals, the researchers could be more confident that any changes in well-being were caused by the relationship transition itself rather than preexisting differences.
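The matching logic described above can be sketched in a few lines. The following is a rough illustration, not the authors' actual pipeline: it uses synthetic data with two made-up covariates (age and education) standing in for the study's richer set, fits a simple hand-rolled logistic regression to estimate each person's propensity to experience the transition, and then pairs each "treated" person with the control whose propensity score is closest.

```python
import numpy as np

def propensity_scores(X, treated, lr=0.1, steps=2000):
    """Fit logistic regression P(treated | X) by gradient descent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - treated) / len(X)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def nearest_neighbor_match(scores, treated):
    """Pair each treated unit with the control whose score is closest."""
    treated_idx = np.where(treated == 1)[0]
    control_idx = np.where(treated == 0)[0]
    return [(i, control_idx[np.argmin(np.abs(scores[control_idx] - scores[i]))])
            for i in treated_idx]

rng = np.random.default_rng(0)
n = 400
age = rng.normal(60, 8, n)    # hypothetical covariate: age
educ = rng.normal(14, 2, n)   # hypothetical covariate: years of education
# make older, more educated people likelier to experience the transition
logit = 0.4 * (age - 60) / 8 + 0.4 * (educ - 14) / 2
treated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([(age - 60) / 8, (educ - 14) / 2])  # standardized
scores = propensity_scores(X, treated)
pairs = nearest_neighbor_match(scores, treated)

# After matching, treated units and their matched controls should be
# much more similar on the covariates than the raw groups are.
t_idx = [i for i, _ in pairs]
c_idx = [j for _, j in pairs]
raw_gap = abs(age[treated == 1].mean() - age[treated == 0].mean())
matched_gap = abs(age[t_idx].mean() - age[c_idx].mean())
print(round(raw_gap, 2), round(matched_gap, 2))
```

Comparing outcomes only within matched pairs is what lets the researchers attribute well-being changes to the transition itself rather than to who selects into relationships.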

The study first examined the levels of emotional support participants felt they received from friends and family. The data confirmed the traditional sociological view that men generally perceive themselves as having less external emotional support than women. Men reported lower levels of support from their social networks outside of their romantic partnerships. Based on the theoretical framework, this deficit should have made men more vulnerable to separation and more responsive to new relationships.

The results regarding separation were unexpected. The researchers hypothesized that breaking up would lead to increased depressive symptoms and decreased life satisfaction, particularly for men. The data did not support this. The analysis showed no significant decline in well-being for men or women following a separation. This finding challenges the narrative of the fragile older male who collapses without a partner.

It suggests that older adults may possess a high degree of emotional resilience or that they are effective at mobilizing other social resources when necessary. It is also possible that for some older adults, the end of a relationship brings a sense of relief that counterbalances the stress of the loss.

When the researchers looked at positive relationship transitions, the picture became clearer but still defied gender stereotypes. Moving in with a partner was associated with a measurable increase in life satisfaction. This benefit was shared equally by men and women. The gender of the participant did not predict who would be happier after cohabiting. The anticipated “male bonus”—where men would gain more happiness because they were filling a larger emotional void—did not appear in the data. The psychological lift provided by a new live-in partnership appears to be a universal benefit in this age group, regardless of gender.

The study also dissected the nuances of marriage versus cohabitation. For couples who were not living together, moving in and getting married at the same time produced the same boost in life satisfaction as simply moving in together. For couples who were already cohabiting, the act of getting married did not result in any additional increase in life satisfaction or decrease in depressive symptoms.

The data indicates that the daily reality of sharing a life, a home, and a routine is the primary driver of well-being. The legal and ceremonial act of marriage does not appear to add a distinct layer of psychological protection or happiness on top of the benefits already provided by cohabitation.

These results reflect changing societal norms. In previous decades, cohabitation without marriage was often stigmatized, and marriage served as the gateway to social approval and financial stability. As cohabitation has become a normative part of the relationship landscape, even for older adults, the unique power of marriage to alter one’s sense of well-being seems to have diminished. The “marriage benefit” often cited in older sociological literature may actually be a “living together benefit.”

The researchers cautioned that there are limitations to how these findings should be interpreted. The sample was drawn from the United States, a Western, industrialized nation with specific cultural ideas about romance and individualism. The results might not apply to cultures where family structures are different or where marriage carries heavier social weight.

Additionally, the study focused on heterosexual relationships. The dynamics of separation and marriage could function differently in the LGBTQIA+ community, where friendship networks often play a unique role in providing support that biological families or traditional institutions may not.

Another important caveat involves the measurement of emotional support. While men reported less support than women on average, the overall levels of support in the sample were quite high. Few participants reported having no support at all. It is possible that the gendered effects of relationship transitions only become visible in populations that are truly isolated. If a man has absolutely no friends and loses his wife, he might indeed suffer more than a woman in a similar position. However, within the general population of older Americans who participate in health studies, such extreme isolation is rare enough that it did not drive the aggregate results.

The timing of the measurements also matters. The Health and Retirement Study interviews participants every two years. This means the data captures the medium-term state of mind of the participants. It is possible that in the immediate weeks or months following a breakup or a wedding, men and women do react differently. A short-term spike in grief or joy might fade by the time the next survey takes place. The findings represent a stabilized view of how these life events reshape the emotional landscape over time.

This research implies that the emotional lives of older men and women are more alike than different. Both genders benefit from the companionship and intimacy of living with a partner. Both genders show surprising resilience when those relationships end. The outdated idea that men are emotionally helpless without a wife is not supported by this data. Men in the 21st century, at least those in this demographic, appear capable of navigating the complex waters of romance and loss with a stability that rivals that of women.

Future research will need to examine these dynamics in more diverse populations. Understanding how economic status intersects with relationship transitions is essential. Moving in together might boost life satisfaction partly because it pools financial resources, a benefit that would be more pronounced for low-income individuals.

Additionally, researchers might look at the quality of the relationships being formed or dissolved. Leaving a high-conflict marriage is likely to improve well-being, while leaving a supportive one would harm it. These qualitative distinctions are difficult to capture in large numerical datasets but are vital for a complete picture of human relationships.

The study, “Relationship transitions and well-being in middle-aged and older men and women,” was authored by Iris V. Wahring, Urmimala Ghose, Christiane A. Hoppmann, Nilam Ram, and Denis Gerstorf.

Scientists discover a liver-to-brain signal that mimics exercise benefits

20 February 2026 at 01:00

Researchers have identified a specific liver protein produced during exercise that strengthens the brain’s protective barrier and improves memory in aging mice. This finding suggests a potential pharmaceutical avenue to mimic the cognitive benefits of physical activity for older adults who cannot exercise. The study was published in the journal Cell.

For decades, medical professionals have recognized that aerobic exercise promotes brain health. Physical activity stimulates the birth of new neurons and improves learning capabilities. It also helps reduce inflammation in the brain. However, this prescription is often difficult to fill for the elderly or those with physical disabilities.

Frailty or cardiovascular conditions can make vigorous exercise impossible. This limitation created a scientific need to understand the biological signals that exercise triggers in the body. If researchers could identify these signals, they might be able to bottle the benefits in a drug.

Saul A. Villeda and his colleagues at the University of California, San Francisco, have spent years investigating how factors circulating in the blood influence aging. The research team previously demonstrated that transferring blood plasma from exercising mice into sedentary mice could transfer the brain benefits of that exercise.

They identified an enzyme called GPLD1 as a key factor. This enzyme is produced by the liver and released into the bloodstream after physical activity. Gregor Bieri, a postdoctoral scholar in Villeda’s lab and the study’s lead author, led the effort to understand how this liver enzyme communicates with the brain.

The researchers faced a biological puzzle regarding GPLD1. This protein is an enzyme, which is a molecule that acts as a catalyst for chemical reactions. However, GPLD1 does not cross the blood-brain barrier. This barrier is a tightly packed layer of cells lining the blood vessels in the brain. It acts as a security checkpoint that prevents toxins and pathogens in the blood from entering the brain tissue. Since GPLD1 remains in the bloodstream, the team reasoned it must be acting on the blood vessels themselves rather than entering the brain directly.

To investigate this hypothesis, the team utilized genetic sequencing data to look at proteins found on the surface of cells in the brain’s blood vessels. They were looking specifically for proteins that anchor themselves to the cell membrane in a way that makes them susceptible to being cut loose by GPLD1. This search led them to a protein called TNAP. The researchers found that levels of TNAP are low in young, healthy mice but rise considerably as the animals age.

The team discovered that high levels of TNAP on the blood vessels are detrimental to the blood-brain barrier. When TNAP is abundant, the barrier becomes permeable and leaky. This allows harmful substances to seep into the brain, causing inflammation and impairing the function of neurons. The researchers determined that the job of GPLD1 is to act like a pair of molecular scissors. It circulates in the blood, finds the TNAP anchored to the brain’s blood vessels, and snips it off. This process reduces the amount of active TNAP, which in turn helps restore the integrity of the blood-brain barrier.

To confirm this mechanism, the researchers conducted a series of experiments on mice. They first used a genetic technique to artificially increase the levels of TNAP in the brain blood vessels of young mice. These young mice soon developed leaky blood-brain barriers and performed poorly on memory tests, effectively mimicking the conditions of old age. This experiment demonstrated that excess TNAP is a driver of cognitive decline.

Next, the researchers treated aged mice with the liver enzyme GPLD1. They injected the mice with genetic instructions that caused their livers to produce more of the enzyme, simulating the effects of exercise. The results showed that the enzyme successfully trimmed away the excess TNAP. Consequently, the blood-brain barrier became less leaky. The aged mice also showed improvements in cognitive function. They were better able to recognize new objects and navigate mazes compared to untreated aged mice.

“We were able to tap into this mechanism late in life, for the mice, and it still worked,” said Bieri.

The team also explored a more direct pharmaceutical approach. Instead of using the liver enzyme, they administered a drug known to inhibit the activity of TNAP. This drug, called SBI-425, effectively blocked the action of the protein without needing the enzyme to cut it off. The aged mice treated with this inhibitor showed similar improvements in memory and barrier function to those treated with GPLD1. This finding indicates that targeting TNAP directly could be a viable strategy for drug development.

The researchers then extended their investigation to Alzheimer’s disease. They utilized a strain of mice genetically engineered to develop sticky plaques in the brain and memory problems associated with Alzheimer’s. When these mice were treated with GPLD1 or the TNAP inhibitor, they showed a reduction in the density of these plaques. They also exhibited improved behavior, such as building better nests, which is a standard measure of well-being in mice.

These findings highlight the importance of the connection between the liver and the brain. It appears that the liver acts as a sensor for physical activity and sends a chemical dispatch telling the brain’s security system to tighten its defenses. When that signal is weak, because of aging or a lack of exercise, the defenses crumble. Restoring the signal, or directly inhibiting the damaging protein it normally trims away, can reverse some aspects of aging.

“This discovery shows just how relevant the body is for understanding how the brain declines with age,” said Villeda.

While the results are promising, there are necessary caveats to consider. The study was conducted in mice, and human biology may not respond in the exact same way. However, the researchers did analyze tissue samples from deceased humans. They found that the brains of people with Alzheimer’s disease had higher levels of TNAP in their blood vessels compared to healthy individuals. This correlation suggests the mechanism is conserved across species.

Additionally, the blood-brain barrier is a complex structure. While trimming TNAP appears to fix leaks, there may be other consequences to manipulating this protein that are not yet fully understood. TNAP has other functions in the body, including roles in bone mineralization.

Any potential drug would need to be specific enough to target the brain’s blood vessels without causing side effects in the skeleton or other organs. The drug used in this study, SBI-425, does not cross the blood-brain barrier, which is beneficial as it acts only on the vessel walls and not inside the brain tissue itself.

Future research will need to determine the safety and efficacy of TNAP inhibitors in humans. The team also plans to investigate whether there are other proteins on the blood-brain barrier that the liver enzyme might target. For now, this study provides a mechanistic blueprint for how the simple act of running can physically reinforce the walls that protect our minds.

The study, “Liver exerkine reverses aging- and Alzheimer’s-related memory loss via vasculature,” was authored by Gregor Bieri, Karishma J.B. Pratt, Yasuhiro Fuseya, Turan Aghayev, Juliana Sucharov, Alana M. Horowitz, Amber R. Philp, Karla Fonseca-Valencia, Rebecca Chu, Mason Phan, Laura Remesal, Shih-Hsiu J. Wang, Andrew C. Yang, Kaitlin B. Casaletto, and Saul A. Villeda.

Big five personality traits predict fertility expectations across reproductive age

19 February 2026 at 23:00

A study in the Netherlands identified groups of people who differed in how their expectations of becoming parents developed across their reproductive years. Forty-four percent of men and 40% of women held stable expectations of having children in the future, lasting until their mid-to-late 30s. Individuals with stable parenthood expectations tended to score higher on agreeableness and extraversion. The paper was published in the Journal of Personality.

During the past half-century, fertility rates have been decreasing worldwide. People, on average, have fewer children, and increasing numbers experience childlessness. Major factors contributing to this include longer time spent in education and the resulting postponement of marriage, new fertility regulation technologies that made having children a choice, and wider acceptance of individual rights to make life choices.

Consequently, fertility rates across the developed world have been below replacement level for the past several decades, producing a population decline in many countries. This has drawn significant research interest toward fertility intentions—people’s plans and expectations about having children.

Research shows that, in young adulthood, most individuals plan to have children, and often more than one. However, as time passes, many fail to enact these expectations, staying childless well into middle age or having fewer children than initially planned.

Study author İlayda Özoruç and her colleagues wanted to explore how fertility expectations develop and change in men and women living in the Netherlands across the reproductive age period. They also wanted to explore how the trajectories of change in these expectations are associated with individual personality traits.

The authors note that voluntary childlessness (i.e., being childfree) has become increasingly socially acceptable in the Netherlands. Consequently, they expected greater variation between individuals in how they imagine their future regarding having children.

The researchers analyzed data from the Longitudinal Internet Studies for the Social Sciences (LISS) panel. The LISS is an ongoing panel study that started in 2007. It includes 5,000 households in the Netherlands comprising nearly 7,500 individuals.

The data used in this study came from 5,231 participants who were non-parents at the start of the study and participated between 2008 and 2022. The average participant provided responses in 3 to 4 data collection waves during the studied period. Sixty-eight percent participated in more than one data collection wave. Fifty-two percent of participants were women. During this period, roughly 15% of participants became parents for the first time.

The authors used data on participants’ fertility expectations (“Do you think you will have children in the future?”), Big Five personality traits (obtained using the International Personality Item Pool), and parenthood status (tracking the transition to the first child).

The researchers categorized participants based on how their parenthood expectations developed during the study period. The largest category for both men and women was the normative category (44% of men and 40% of women). In this category, people started with expectations that they would have children in the future. As time passed, the majority became parents (84% of men and 92% of women in this group). For those who did not become parents, expectations of having children started dropping sharply in their mid-30s, so that by age 42, almost none expected to have children in the future.

The smallest category, including 8% of men and 7% of women, was the childfree category. People in this group mostly started out not expecting children or being unsure, and became increasingly certain they would not have children as time passed. Only a small percentage of people from this category eventually became parents (6% of men and 12% of women).

The remaining categories showed more complex trajectories, such as general uncertainty about future children (“uncertain trajectory”), switching expectations between yes, no, and unsure (“switching trajectory”), or starting uncertain but gaining expectation later (“postponement trajectory”). Women also showed a unique “abandoning trajectory” (15%), where expectations to have children existed at age 18 but dropped to “unsure” or “no” starting around age 25.

When personality traits were considered, results showed that both men and women from the normative group (stable expectation to become parents) scored higher on agreeableness and extraversion compared to the uncertain and childfree groups.

For men specifically, those in the normative group also tended to have lower neuroticism and higher conscientiousness and openness compared to the uncertain and childfree groups.

However, for women, personality differences were fewer. Unlike men, women’s levels of neuroticism, openness, and conscientiousness did not differ significantly between the expectation trajectories.

The study contributes to the scientific understanding of how fertility expectations develop throughout the reproductive years. However, it should be noted that all participants were from the Netherlands, so results for individuals from other countries and cultures might differ.

The paper, “Big Five Personality Traits and Trajectories of Fertility Expectations Across the Reproductive Age Period,” was authored by İlayda Özoruç, Jeroen Vermunt, Katya Ivanova, and Manon van Scheppingen.

Neural signatures of impulsivity and neuroticism are largely distinct in youth

19 February 2026 at 21:00

New research published in Molecular Psychiatry suggests that two major personality traits associated with alcohol use—impulsivity and neuroticism—stem from largely distinct brain networks. While both traits heighten the risk for problematic drinking in adolescents, the biological pathways driving that risk appear to be different. This finding supports the concept that there are multiple neurological routes that can lead to similar risky behaviors in youth.

Impulsivity and neuroticism are well-known psychological risk factors for substance abuse, yet it remains unclear how these traits manifest in the brain’s complex wiring. Previous studies often focused on isolated brain regions rather than the broad communication patterns across the entire brain.

The research team aimed to determine whether these two personality traits share a common neural foundation or if they operate through separate mechanisms. By mapping these connections, the scientists hoped to clarify how different vulnerabilities contribute to the onset of alcohol use during the critical developmental period of adolescence.

“We are interested in understanding how risk factors in adolescence contribute to substance use problems later in life,” explained study authors Annie Cheng and Sarah Yip, a postdoctoral associate and an associate professor, respectively, at the Yale School of Medicine.

“Traits like impulsivity (acting without thinking) and neuroticism (tending to experience more negative emotions) are known to increase alcohol-use risk, but we still don’t fully understand what is happening in the brain that connects these traits to later outcomes—especially during adolescence, when risky behaviors and many mental health conditions first emerge. Our study uses brain connectivity patterns to better understand how these personality traits may relate to substance use at a biological level.”

The study analyzed data from approximately 1,100 young adults who participated in the IMAGEN study, a large multi-center genetic-neuroimaging project in Europe. The participants were 19 years old at the time of the brain scans. During the magnetic resonance imaging (MRI) sessions, the participants performed a specific activity known as the Stop Signal Task. This task measures inhibitory control by asking participants to respond to a stimulus but withhold their response when a specific signal appears.

The researchers utilized a technique called functional connectivity analysis. This method examines how different regions of the brain communicate with one another by measuring the synchronization of their activity over time. Using a machine learning approach called connectome-based predictive modeling, the team sought to identify specific patterns of brain connectivity that could predict a participant’s levels of impulsivity and neuroticism.
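At its core, functional connectivity is conceptually simple: correlate each brain region’s activity time series with every other region’s, producing a region-by-region matrix. The toy sketch below uses synthetic data (not the IMAGEN pipeline) to show that step; connectome-based predictive modeling then adds a machine-learning layer that relates the entries of such matrices to trait scores across participants.

```python
import numpy as np

def connectivity_matrix(ts):
    """ts: (timepoints, regions) array of activity signals.
    Returns the (regions, regions) matrix of Pearson correlations --
    the 'functional connectivity' between every pair of regions."""
    return np.corrcoef(ts, rowvar=False)

rng = np.random.default_rng(1)
t, r = 200, 5                  # 200 timepoints, 5 toy brain regions
ts = rng.normal(size=(t, r))
ts[:, 1] += 0.8 * ts[:, 0]     # make regions 0 and 1 co-activate

fc = connectivity_matrix(ts)
print(fc.shape)                # (5, 5)
```

In the actual study, each participant contributes one such matrix (with hundreds of regions), and the model learns which connections jointly predict impulsivity or neuroticism scores.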

Impulsivity was measured using the Substance Use Risk Profile Scale, which assesses a person’s tendency to act without foresight. Neuroticism, the tendency to experience negative emotions like anxiety or moodiness, was assessed using the NEO Five-Factor Inventory. The scientists also looked at the participants’ alcohol use behaviors using a standardized screening test.

The researchers found that the brain networks predicting impulsivity were fundamentally different from those predicting neuroticism. The neural signature for impulsivity was primarily characterized by connections involving motor and sensory areas of the brain. This suggests that the biological basis of acting without thinking is closely tied to the systems that manage physical movement and sensory processing.

In contrast, the neural signature for neuroticism was much more distributed throughout the brain. It involved a wide array of networks, including those responsible for emotion regulation, self-reflection, and executive control. Specifically, the neuroticism network included connections in the default mode network, the frontoparietal network, and subcortical regions.

A direct comparison of the two networks showed very little overlap. Only about 3 percent to 4 percent of the functional connections were shared between the impulsivity and neuroticism models. This indicates that while these traits often occur together in individuals, they arise from distinct neurobiological architectures.

“We were somewhat surprised by how little overlap there was between the brain networks associated with impulsivity and neuroticism,” Cheng and Yip told PsyPost. “These traits often co-occur and jointly predict a wide range of psychiatric conditions, so one might expect them to share more common neural architecture. Instead, we found that their underlying brain connectivity patterns were largely distinct. This raises an important question for future research: why are impulsivity and neuroticism implicated together in so many forms of psychopathology if their neural networks are largely distinct?”

The researchers also compared these personality networks to a previously identified brain network associated with alcohol-use risk. Both impulsivity and neuroticism networks showed some overlap with the alcohol-risk network, sharing about 10 percent to 20 percent of their connections. However, the specific connections that overlapped were different for each trait.

This finding provides biological evidence for the psychological concept of equifinality. Equifinality is the idea that different developmental pathways can lead to the same outcome. In this context, one teenager might be at risk for alcohol misuse due to motor-sensory disconnects related to impulsivity, while another might be at risk due to emotional regulation issues related to neuroticism.

“Even though impulsivity and neuroticism are both linked to alcohol-use risk, they appear to be supported by largely distinct brain networks,” the researchers explained. “In other words, two teens might be at risk for similar behaviors, but for different underlying neurobiological reasons. This supports the idea that there isn’t just one pathway to risky behavior—there are multiple routes that can lead to the same outcome.”

To ensure the results were robust, the scientists tested their models on an independent group of participants. They used data from the Adolescent Brain Cognitive Development (ABCD) Study, which includes children in the United States. Even though the ABCD participants were younger, aged 11 to 12, the models still showed a significant association, suggesting the findings are generalizable across different populations.

“The practical significance of our findings lies in improving our understanding of how vulnerability may develop into risky behavior over time,” Cheng and Yip said. “We show that impulsivity and neuroticism—two traits that both increase alcohol-use risk—are supported by largely distinct brain networks and relate to alcohol use risk via different brain connections.”

“This suggests that prevention and intervention efforts may need to differ depending on whether a young person’s risk is driven more by difficulty regulating behavior or difficulty managing negative emotions. In other words, our findings support a more personalized, mechanism-based approach to reducing adolescent risk, rather than assuming a one-size-fits-all approach.”

As with all research, there are limitations. The sample from the IMAGEN study primarily consisted of individuals of White European ancestry. It remains to be seen if these specific brain patterns apply equally to more diverse populations. Additionally, the study relied on self-reported questionnaires to measure personality traits, which may not always capture the full complexity of an individual’s behavior.

The brain scans were collected while participants were performing a specific task rather than while resting. It is possible that the brain networks might look different when the brain is not engaged in a structured activity. The study was also cross-sectional, meaning it looked at a single point in time, so it cannot definitively prove that the brain patterns caused the behaviors.

It is also important to note that these brain patterns represent risk factors rather than deterministic predictions. “These brain patterns do not mean a teenager is ‘wired’ for alcohol problems,” Cheng and Yip noted. “Instead, they point to systems that may contribute to vulnerability, helping guide prevention efforts toward strengthening regulatory skills, emotional coping, and supportive environments.”

“A key next step is to examine how these brain networks change over time and how they relate to future alcohol use or other mental health outcomes. As large longitudinal studies like ABCD continue to follow participants into later adolescence and adulthood, we will be able to test how these neural signatures evolve across development. Ultimately, we hope this work contributes to more targeted and developmentally informed prevention approaches.”

“One broader implication is that adolescent risk-taking behavior reflects highly complex interactions among developing brain systems, personality traits, and environmental factors,” the researchers said. “By studying these systems in large, diverse samples, we can move toward a more nuanced understanding of youth development that recognizes individual variability.”

The study, “Impulsivity and neuroticism share distinct functional connectivity signatures with alcohol-use risk in youth,” was authored by Annie Cheng, Sarah Lichenstein, Bader Chaarani, Qinghao Liang, Marzieh Babaeianjelodar, Steven J. Riley, Wenjing Luo, Corey Horien, Abigail S. Greene, Tobias Banaschewski, Arun L. W. Bokde, Sylvane Desrivières, Herta Flor, Antoine Grigis, Penny Gowland, Andreas Heinz, Rüdiger Brühl, Jean-Luc Martinot, Marie-Laure Paillère Martinot, Eric Artiges, Frauke Nees, Dimitri Papadopoulos Orfanos, Luise Poustka, Sarah Hohmann, Nathalie Holz, Christian Baeuchl, Michael N. Smolka, Nilakshi Vaidya, Henrik Walter, Robert Whelan, Gunter Schumann, R. Todd Constable, Godfrey Pearlson, Hugh Garavan, and Sarah W. Yip.

New psychology research reveals how repetitive thinking primes involuntary memories

19 February 2026 at 19:00

New research provides evidence that the repetitive thoughts occupying a person’s mind can directly influence the spontaneous memories they experience later. This phenomenon, termed “preoccupation priming,” suggests that focusing on a specific topic creates a tendency for the brain to retrieve personal memories related to that subject. The study was published in the scientific journal Consciousness and Cognition.

Psychologists have studied involuntary autobiographical memories for many years. These are memories of past personal events that pop into consciousness without any deliberate attempt to retrieve them. They often occur during mundane activities, such as walking down the street or washing dishes.

Previous research indicated a strong connection between a person’s current life concerns and the content of these spontaneous memories. For instance, diary studies showed that individuals going through a breakup or starting a new diet often reported involuntary memories centered on those specific themes.

However, these earlier studies were primarily correlational. They relied on participants recording their daily experiences, which made it difficult to determine the direction of cause and effect. It was unclear if thinking about a topic caused the memories, or if having the memories caused the person to think about the topic more frequently.

Researchers John H. Mace and Emily Chow sought to resolve this ambiguity by conducting a controlled laboratory experiment. They aimed to establish a causal link by manipulating what participants thought about and then measuring the subsequent effect on their involuntary memories. The goal was to demonstrate that the cognitive act of repetitive thinking serves as a mechanism that primes the memory system.

The study included 60 undergraduate students as participants. The researchers randomly assigned these individuals to one of two groups: a repetitive thinking group and a control group.

The experiment began with a priming phase designed to simulate the experience of being preoccupied. Participants were told they were taking part in a study on concentration. They viewed a series of slides on a computer screen that instructed them to imagine an activity or think about a specific topic.

In the repetitive thinking group, participants viewed ten slides. Seven of these slides instructed them to “think about food.” The remaining three slides offered unrelated instructions, such as imagining raking leaves or setting goals. Each slide remained on the screen for 35 seconds, forcing the participant to maintain their focus on the topic for a sustained period.

The control group also viewed ten slides with similar timing. However, only one slide instructed them to think about food. The other nine slides featured various unrelated prompts, such as imagining sitting in a chair, thinking about rain, or thinking about watches. This design ensured that the control group was exposed to the topic of food but did not engage in the repetitive rumination characteristic of a preoccupation.

Following the priming phase, the researchers administered a vigilance task. This is a standard method used in psychology to elicit and record involuntary memories. Participants watched a sequence of 92 slides. Each slide contained a pattern of horizontal or vertical lines with a short phrase embedded in the center, such as “hanging your clothes” or “growing a garden.”

The participants were given a simple, repetitive assignment to keep them occupied. They had to say “yes” out loud whenever they saw a slide with vertical lines. This type of low-attention task is known to encourage mind-wandering, which facilitates the emergence of spontaneous memories.

The researchers instructed the participants to ignore the text phrases on the slides but to pay attention to their own mental states. If they experienced a spontaneous thought or a specific memory during the task, they were to click a mouse button and write down what they experienced in a booklet.

Crucially, the vigilance task included specific cues designed to trigger food-related memories. Of the 89 non-practice slides, seven contained phrases directly related to food, such as “buying food,” “cooking dinner,” or “eating good food.” The remaining 82 slides contained neutral phrases unrelated to the priming topic.

After the task was completed, the participants reviewed their written entries. They categorized each entry as either a general thought or a specific memory. Two independent judges also reviewed the entries to determine if they were related to food.

The researchers found that the participants who engaged in repetitive thinking about food produced significantly more involuntary memories related to food than the control group. This outcome supports the hypothesis that preoccupation operates as a form of priming. By thinking about a subject repeatedly, the brain becomes sensitized to that information, making related past experiences more accessible to involuntary retrieval.

The study yielded another finding regarding the total number of memories produced. The repetitive thinking group reported a higher number of involuntary memories overall, regardless of whether the memories were about food. This suggests that the act of repetitive thinking might trigger a state of heightened memory accessibility.

The researchers suggest this increase in total memories may be due to “collateral priming.” This concept implies that when a specific network of memories is activated, such as memories about food, the activation spreads to other associated memories. For example, a memory about a dinner party might activate memories about the friends who were there or the location where it happened, even if those details are not strictly about food.

The study also compared the number of spontaneous thoughts that were not memories. The data showed no significant difference between the two groups regarding these non-memory thoughts. This indicates that the priming effect was specific to the autobiographical memory system and did not simply increase general thoughts about the topic.

These findings have implications for understanding how our daily mental habits shape our cognitive reality. The results suggest that the things we obsess over or worry about do more than just occupy our conscious attention. They actively recruit our past experiences, bringing related memories to the forefront of our minds.

“Your daily involuntary memories will track your thoughts and all the information that you process,” explained Mace, professor and chair of the Department of Psychology at Eastern Illinois University. “Most of the time, you will be unaware of the connection. If you are preoccupied with a particular idea (e.g., losing weight, a former partner), thinking about it a lot, this will influence your involuntary memories, in that many of them will feature the topic you are preoccupied with. You will make these connections, and this will not be a problem unless the preoccupations and memories are distressing.”

There are some limitations to this study that should be considered. The experiment focused on a single topic: food. While food is a common subject of daily thought, it is possible that other topics might yield different results.

Additionally, the duration of the repetitive thinking in the lab was relatively short, totaling about four minutes. Real-world preoccupations often last for days, weeks, or months. It is plausible that the effects observed in the laboratory would be even stronger in a natural setting where the repetition is more frequent and intense.

Future research in this area aims to explore different types of topics to see if the effect is universal. The scientists are also interested in examining how the frequency of the repetitive thought impacts the strength of the priming. Understanding these variables could provide deeper insight into how our internal monologues influence the way we remember our past.

The study, “Preoccupation priming: How repetitive thinking can influence our involuntary memories,” was authored by John H. Mace and Emily Chow.

The neuroscience of limerence and how to break the cycle of romantic obsession

19 February 2026 at 18:00

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

On Friday, January 30, the Sex and Psychology podcast, hosted by social psychologist Dr. Justin Lehmiller, featured Dr. Tom Bellamy. The episode explored the neurobiology of limerence, a state of intense romantic obsession, and examined strategies for breaking the cycle of unwanted attachment.

The conversation began by defining limerence not as a disorder, but as a biological trait that can be integrated into one’s emotional life. Bellamy explained that while the euphoric “fireworks” of new attachment are powerful, they typically fade within a few years. He noted that chasing this specific high often traps people in a cycle of serial monogamy, preventing the formation of stable, companionate love.

Later in the episode, the discussion shifted to the neurological similarities between limerence and addiction. Bellamy described a process where the brain’s dopamine-driven “wanting” system becomes sensitized, acting like an accelerator pedal pressed to the floor. Simultaneously, the prefrontal cortex—the area of the brain responsible for self-control and decision-making—becomes weakened, effectively releasing the brakes.

To counter this, Bellamy emphasized the need to strengthen executive function and “wake up the mental CEO.” He recommended mindfulness practices to interrupt subconscious habit loops, such as recognizing the urge to check a text message before acting on it. He also highlighted that foundational health habits, including proper sleep and exercise, create a “halo effect” that improves cognitive bandwidth for emotional regulation.

A more aggressive strategy involves “devaluing” the object of affection to break the cycle of idealization. Bellamy introduced the concept of the “daymare,” a technique where individuals deliberately alter their pleasant daydreams to include negative or rejecting endings. This approach uses negative conditioning to replace feelings of comfort with aversion.

Bellamy clarified that the purpose of this negative visualization is not to harbor permanent resentment. Instead, the aim is to accelerate the psychological process of extinction, where the brain stops expecting a reward from the person. The ultimate goal is to reach a state of neutrality, viewing the former partner realistically as an ordinary person with normal flaws.

You can listen to the full interview here.

What was Albert Einstein’s IQ?

19 February 2026 at 17:00

If you search the internet for the smartest people in history, one name appears more than any other. That name is Albert Einstein. His wild hair and expressive face have become the universal symbol for genius. But what was his IQ score?

Einstein was a theoretical physicist born in Germany in 1879. He is best known for developing the theory of relativity. This work fundamentally changed how humanity understands the universe.

Before Einstein, the laws of physics seemed set in stone. Isaac Newton had described a world of absolute time and space. Einstein challenged this view.

In 1905, often called his “miracle year,” he published four groundbreaking papers. One of these papers introduced the famous equation E=mc². This equation demonstrated that mass and energy are interchangeable.

He did not stop there. He went on to explain the photoelectric effect, which was a vital step toward quantum theory. This specific work won him the Nobel Prize in Physics.

His contributions led to technologies we use every day. Without his theories, we would not have GPS navigation or laser technology. He reshaped our concept of reality itself.

Because his achievements were so monumental, people naturally wonder about the mind that created them. We want to quantify his brilliance. We want to know if his brain was different from ours.

Understanding the Intelligence Quotient

To understand the rumors about Einstein’s score, we must first understand the test itself. IQ stands for Intelligence Quotient. It is a standardized score derived from a set of tests.

These tests are designed to assess human intelligence. The first modern intelligence test was developed in France in 1905. Psychologists Alfred Binet and Théodore Simon created it.

Their original goal was not to identify geniuses. Instead, they wanted to identify children who needed extra help in school. The test was a tool for education, not a measure of elite status.

Later, American psychologists adapted these tests for adults. The most famous early version was the Army Alpha test. It was created in 1917 to evaluate soldiers during World War I.

Modern tests, such as the Wechsler Adult Intelligence Scale, measure various cognitive abilities. They look at verbal comprehension and working memory. They also measure perceptual reasoning and processing speed.

The average score on these tests is set at 100. Most people score between 85 and 115. A score above 130 is typically considered “gifted.”

The maximum score on current tests often tops out around 160. A score that high sits above the 99.9th percentile, meaning the person outscores virtually everyone in the general population.
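The percentile claims above follow directly from the normal distribution that IQ tests are standardized to. A minimal sketch, assuming the usual mean of 100 and standard deviation of 15 (some tests use an SD of 16, which shifts the numbers slightly):

```python
import math

def iq_percentile(score, mean=100.0, sd=15.0):
    """Percentile of an IQ score under a normal distribution,
    computed from the standard normal CDF via the error function."""
    z = (score - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) * 100.0

print(round(iq_percentile(115), 1))  # 84.1 -> one SD above the mean
print(round(iq_percentile(130), 1))  # 97.7 -> the common "gifted" cutoff
```

A score of 160 is four standard deviations above the mean, which is why tests rarely extend beyond it: so few people score there that the norming samples cannot distinguish scores reliably.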

The Missing Evidence

This brings us to the central question. Did Albert Einstein ever take an IQ test? According to a 2023 article by psychologist Russell T. Warne, the answer is almost certainly no. Warne asserts that there is no evidence Einstein ever sat for such an assessment.

Warne analyzes the timeline of Einstein’s life to support this conclusion. Einstein was born in 1879. He was already 26 years old when Binet created the first children’s test in 1905. He was an established adult by the time testing became common.

The first adult test, the Army Alpha, appeared in 1917. At that time, Einstein was 39 years old. He was living in Europe and was already a world-famous celebrity. Warne argues that Einstein had little to gain from taking an intelligence test.

It is unlikely that a physicist of his stature would have bothered with a psychometric evaluation. He was busy working on unified field theory. He was also navigating the political turmoil of Europe. There are no records in the Albert Einstein Archives or biographies that mention a test.

Where the Estimates Originate

If there is no record of a test, where does the number 160 come from? Warne conducted a search of historical publications to find the answer. He found that journalists and writers have been guessing Einstein’s IQ for nearly a century.

One of the earliest estimates appeared in a 1945 issue of Life magazine. The article profiled a 14-year-old prodigy named Merrill Kenneth Wolf. The magazine reported that Wolf had an IQ of 182. The article stated that this was “only 23 points lower than Einstein’s.”

This phrasing implies that the magazine believed Einstein’s IQ was 205. However, Life magazine was not consistent. In 1954, the same magazine published an article about another prodigy. This time, they estimated Einstein’s IQ at 192.

Other publications joined the guessing game. In 1962, Popular Mechanics stated that Einstein was estimated to have an IQ of 207. A 1974 book by Mariann Olden claimed his IQ was 205. 

Warne points out that the variation in these numbers is extreme. They range from 150 to over 200. This inconsistency suggests that the numbers are fabricated. There is no primary source. The number 160 appears to be a modern consensus among journalists, but it is not based on data.

Psychologists Weigh In

Academic experts are skeptical of these numbers. In a 2020 article for Biography.com, Dean Keith Simonton weighed in on the issue. Simonton is a professor emeritus of psychology at the University of California, Davis.

Simonton warns that these estimates often confuse two different things. They conflate intellectual ability with domain-specific achievement. Einstein was the greatest theoretical physicist of his time. This means he was exceptional in physics.

However, general intelligence tests measure a wide range of skills. They test vocabulary, pattern recognition, and memory. Being a genius in physics does not guarantee a perfect score in every other area. Simonton suggests that if you look at Einstein’s early development, his raw IQ might not have been as striking as his physics work.

Jonathan Wai, a professor at the University of Arkansas, offers a different perspective in the same Biography.com article. Wai notes that people who earn PhDs in physics typically have extremely high IQs.

Wai points to Einstein’s famous thought experiments. As a teenager, Einstein imagined chasing a beam of light. This required intense spatial visualization. Wai argues that this suggests Einstein was highly talented in spatial reasoning.

Wai believes that if Einstein had been tested, he would have scored well above average. This is consistent with data on other physicists. However, this is still a prediction, not a confirmed score.

The Biological Evidence

While we lack a test score, we do have biological evidence. We have Einstein’s brain. When Einstein died in 1955, a pathologist named Thomas Harvey performed the autopsy. Harvey removed the brain for scientific study.

In 1999, a team of researchers published a landmark study in The Lancet. The team was led by Sandra F. Witelson and Debra L. Kigar. They worked with Thomas Harvey to analyze the anatomy of the brain.

The researchers compared Einstein’s brain to a control group. This group consisted of 35 brains from men with normal intelligence. The men in the control group had an average IQ of 116.

The study revealed something surprising about brain size. Many people assume that a genius must have a massive brain. However, Einstein’s brain weighed 1,230 grams. This was not statistically different from the control group.

In fact, his brain was slightly lighter than the average for the men in the study. This finding is notable: it indicates that total brain weight is not the primary factor in exceptional intelligence. A heavy brain does not automatically equal a smart mind.

Unique Brain Architecture

Although the weight was normal, the structure was not. Witelson and her colleagues found unique features in the parietal lobes. The parietal lobes are the part of the brain responsible for processing sensory information.

This region handles visuospatial cognition and mathematical thinking. The researchers measured the width of Einstein’s brain. They found that his parietal lobes were 15 percent wider than those of the control group.

This extra width gave his brain a more spherical shape than a typical human brain. The researchers also discovered a unique feature on the surface of the brain. The brain has deep folds and grooves. One major groove is called the Sylvian fissure.

In a normal brain, the Sylvian fissure runs deep and meets a structure called the parietal operculum. The study found that Einstein lacked a parietal operculum in both hemispheres.

Because this structure was missing, the Sylvian fissure did not run as far as usual. It merged with another groove called the postcentral sulcus. This was a unique anatomical variation. The researchers did not see this in any of the control brains.

The Functional Impact

The researchers in The Lancet study proposed a theory about this anatomy. They suggested that the absence of the parietal operculum allowed the inferior parietal lobule to expand. This is a specific area within the parietal lobe.

The scientists hypothesized that this expansion allowed for better connections between neurons. Without the usual groove separating the area, the brain cells could communicate more efficiently. This creates a highly integrated network for visual and spatial thinking.

This biological finding aligns with how Einstein described his own mind. He often stated that words were not significant in his thought process. Instead, he thought in signs and images.

He visualized complex physical problems. His theory of relativity came from visualizing moving bodies and light. The researchers concluded that his unique parietal anatomy likely supported this specific type of reasoning.

The Threshold of Intelligence

The biological evidence tells us Einstein was unique. However, it does not confirm a specific IQ number. This leads to a broader discussion about the value of IQ scores.

In his book Outliers, Malcolm Gladwell discusses the relationship between IQ and success. He compares Einstein to a man named Christopher Langan. Langan appeared on the TV show 1 vs. 100. The show claimed Langan had an IQ of 195.

If we accept the common estimate of 160 for Einstein, then Langan’s score is significantly higher. By strict numerical logic, Langan should be “smarter.” Yet, Einstein is the one who revolutionized science.

Gladwell uses this comparison to introduce the “threshold theory.” He argues that intelligence matters up to a point. You have to be smart enough to handle complex ideas. But once you cross that threshold, a higher score does not guarantee more success.

Gladwell supports this by looking at Nobel Prize winners. He lists the colleges attended by the last 25 American winners in medicine. The list includes elite schools like Harvard and Yale. But it also includes schools like Holy Cross, Gettysburg College, and the University of Illinois.

These are good schools, but they are not all exclusive Ivy League institutions. Gladwell argues that a Nobel Prize winner does not need to have the highest IQ in the world. They just need to be smart enough to get into a decent university.

Once a person is “smart enough,” other factors take over. Creativity, persistence, and a willingness to question authority become essential. Einstein possessed these traits in abundance.

Why We Obsess Over the Number

Robert B. McCall, a professor emeritus at the University of Pittsburgh, questioned the value of these estimates in his interview with Biography.com. He stated that he does not see the value in trying to calculate Einstein’s IQ.

McCall argues that famous people are famous for their actions. We should celebrate those actions. Their contributions are only modestly related to a test score. A person can be accomplished in ways that an IQ test cannot measure.

The obsession with the number 160 reveals more about society than it does about Einstein. We want to believe that intelligence is a single, measurable trait. We want to rank people on a scoreboard.

Assigning a score of 160 to Einstein gives us a reference point. It makes the concept of “genius” feel tangible. However, it is an oversimplification. It ignores the specific nature of his mind.

Evolutionary psychology is unfalsifiable? New scientific paper aims to kill this “zombie idea”

Evolutionary psychology hypotheses can be rigorously tested, and sometimes decisively overturned, challenging the long-standing claim that the field is inherently unfalsifiable, according to a conceptual review published in American Psychologist.

Since the 1970s, critics have contended that evolutionary explanations of human behavior amount to “just-so stories,” plausible but empirically untestable narratives flexible enough to accommodate virtually any outcome.

Drawing on Popper’s philosophy of science, these critiques claim that evolutionary psychology fails the criterion of falsifiability and therefore lacks scientific rigor, a perception that has persisted both within academia and public discourse.

William Costello, a doctoral researcher at the University of Texas at Austin, explains, “As a graduate student preparing to go on the job market, I am passionate about correcting the many misconceptions about evolutionary psychology that pervade academia and cultural consciousness. Evolutionary psychology is enormously explanatorily powerful for a wide range of domains, so it is frustrating to constantly have to contend with the decades-old ‘zombie idea’ that its hypotheses are unfalsifiable. This false perception may also prevent younger scholars from embracing the framework in their own work, so hopefully our paper can empower them to push back against uncharitable criticisms when they face them.”

The article takes up that challenge by clarifying what falsifiability requires and by examining how evolutionary psychology constructs and evaluates its hypotheses.

The authors begin by specifying formal criteria for falsifiability: a hypothesis must generate explicit, prohibitive predictions that could, in principle, be contradicted by observable evidence. Vague or underspecified claims can evade disconfirmation, but the authors argue that this is a problem of imprecision, not a defining feature of evolutionary psychology.

They then situate evolutionary psychology within a Lakatosian research program structure. At the top sits evolutionary theory as a metatheoretical foundation; below it are middle-level theories (such as parental investment theory); and at the lowest level are specific hypotheses that generate concrete predictions. It is at this level that falsification operates. By distinguishing among these tiers, the authors argue that critics often mistake broad theoretical commitments for unfalsifiable claims, when in fact it is the lower-level predictions that are directly tested and, at times, rejected.

To demonstrate falsifiability in action, the authors review three prominent hypotheses that have been substantially weakened or refuted. First, the ovulatory shift (dual-mating) hypothesis predicted that women’s mate preferences would reliably shift toward traits signaling “good genes” during ovulation. Although early studies appeared supportive, larger and more rigorous replication efforts largely failed to confirm consistent fertility-linked preference shifts. The core prediction has not proven robust.

Second, the mate deprivation hypothesis of rape proposed that men lacking mating opportunities would be more likely to commit sexual violence. Empirical tests found the opposite pattern: men with greater mating success and higher status were more likely to report coercive behavior. These findings directly contradict the hypothesis’ central prediction.

Third, the kin altruism hypothesis for male homosexuality suggested that same-sex-attracted men would offset reduced direct reproduction by investing heavily in genetic relatives. Cross-cultural research has yielded mixed or negative evidence, and the level of kin investment observed does not appear sufficient to satisfy inclusive fitness requirements. As a result, the hypothesis lacks strong empirical support.

Alongside these refuted cases, the authors emphasize that many other evolutionary psychological hypotheses, such as those concerning parental investment, jealousy, disgust, and kin-directed altruism, have generated precise predictions that have received substantial empirical backing. The coexistence of confirmed and disconfirmed hypotheses, they argue, is exactly what one would expect in a progressive scientific field.

Reflecting on broader lessons, Costello noted: “There are many other leaders in the field (e.g., Ed Hagen) who have already tackled this problem well in other work. It would be nice to think that our article would be the final word and resolve the matter once and for all, but I think that because there are so many who are ideologically motivated to dismiss evolutionary psychology, scholars will need to defend against this misconception in each generation. We need to be prepared to do so and not allow misconceptions to flourish. There are those who think that we should not bother defensively correcting misconceptions and instead just focus on improving our field. I think we can and should do both.”

He added, “I think it’s good for scholars to have contemporary theoretical work in a leading psychology journal to now point to when they hear the myth espoused in academic or public discourse.”

“Evolutionary psychology is by no means immune to poor hypothesizing and we should always reflect on helping scholars to formulate their hypotheses with sufficient precision that they garner evidence for or against the hypothesized design features of a psychological mechanism,” explained Costello.

“Previous generations of our lab, led by David Lewis (who has been an amazing mentor to me) have taken a very proactive approach on this front. They published a terrific paper in American Psychologist called Evolutionary Psychology: A How to Guide. I encourage readers to read that article too.”

By documenting hypotheses that have been directly contradicted by empirical findings, the article argues that evolutionary psychology is not immune to disconfirmation but instead operates as a research program capable of generating testable (and falsifiable) claims.

The researcher shared that future work could examine whether academic and public perceptions of unfalsifiability have shifted since earlier surveys, and whether interventions such as reading the present article or taking an evolutionary psychology course change minds.

“I was pleased that the article was chosen as the APA Editor’s choice, which means it will be available to read ‘open access’ for 30 days since publication so please go and read it,” Costello told PsyPost. “Or reach out to me to get your hands on a PDF if you can’t gain access to it.” 

“Also, the article was published alongside two commentaries, who both agreed with our core argument that evolutionary psychology hypotheses are indeed falsifiable. Our reply gave us the opportunity to speak to some of evolutionary psychology’s other theoretical strengths (e.g., its heuristic value). That’s titled Beyond Falsifiability: Evolutionary Psychology’s Many Theoretical Strengths: Reply to Geary (2026) and Moore (2026) and I encourage people to read those also.”

William Costello is a doctoral researcher of Evolutionary Psychology at the University of Texas at Austin working under the supervision of Dr. David Buss. You can follow his work on ResearchGate, Google Scholar or on social media at X: @CostelloWilliam or BlueSky: @williamcostello.bsky.social

The research, “Evolutionary Psychology Hypotheses Are Testable and Falsifiable,” was authored by William Costello, Anna G. B. Sedlacek, Patrick K. Durkee, Courtney L. Crosby, Rebecka K. Hahnel-Peeters, and David M. Buss.

Neuroscientists identify a unique feature in the brain’s wiring that predicts sudden epiphanies

19 February 2026 at 15:00

New research published in BMC Psychology suggests that the structural wiring of the brain may play a significant role in how people solve problems through sudden insight. The study indicates that individuals who frequently experience “Aha!” moments tend to have less organized white matter pathways in specific language-processing areas of the left hemisphere. These findings imply that a slightly less rigid neural structure might allow the brain to relax its focus, enabling the unique connections required for creative breakthroughs.

For decades, scientists have studied the phenomenon of insight, which occurs when a solution to a problem enters awareness suddenly and unexpectedly. This is often contrasted with analytical problem solving, which involves a deliberate and continuous step-by-step approach.

While previous studies using functional MRI and EEG have mapped the brain activity that occurs during these moments, there has been little understanding of the underlying physical structure that supports them. The researchers behind the new study aimed to determine if stable differences in white matter—the bundles of nerve fibers that connect different brain regions—predict an individual’s tendency to solve problems via insight.

“For over two decades, neuroscience has mapped what happens in the brain during these moments using EEG and fMRI. We know from prior research that insight feels sudden, tends to be accurate, and involves distinct functional activation patterns — including a burst of activity in the right temporal cortex just before the solution reaches awareness,” said study authors Carola Salvi of the Cattolica University of Milan and Simone A. Luchini of Pennsylvania State University.

“But one major question remained open: what structural features of the brain might make some people more likely to experience insight in the first place?”

“Most previous white matter studies of creativity did not specifically focus on Aha! experiences. They measured how many problems people solved, or how creatively, not how they solved them (with or without these sudden epiphanies). Yet insight and non insight solutions are phenomenologically and neurally distinct processes.”

White matter acts as the communication infrastructure of the brain, transmitting signals between distant regions. To examine this structure, the researchers employed a technique called Diffusion Tensor Imaging (DTI). This method tracks the movement of water molecules within brain tissue.

“We wanted to know whether stable white matter microstructure — the brain’s anatomical wiring — differs depending on whether someone tends to solve problems through sudden insight or through deliberate step-by-step reasoning (non insight solutions),” Salvi and Luchini explained. “Diffusion tensor imaging (DTI) allowed us to examine this structural dimension directly.”

In healthy white matter, water tends to move along the direction of the nerve fibers rather than across them; the strength of this directional preference is quantified by a measure called fractional anisotropy (FA). High FA values generally indicate highly organized, dense, and well-insulated fibers, which are typically associated with efficient signal transmission and strong cognitive performance.
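FA is computed from the three eigenvalues of the diffusion tensor at each voxel, i.e., the water diffusion rates along the tensor's three principal axes. The following is a minimal sketch of that standard formula, not code from the study's analysis pipeline:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Compute FA from the three eigenvalues of a diffusion tensor.

    FA = sqrt(1/2) * sqrt(((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
                          / (l1^2 + l2^2 + l3^2))
    Ranges from 0 (equal diffusion in all directions) to 1
    (diffusion confined to a single axis).
    """
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

# Isotropic diffusion (e.g., cerebrospinal fluid): FA = 0
print(fractional_anisotropy(1.0, 1.0, 1.0))
# Diffusion mostly along one axis (dense, parallel fibers): FA near 1
print(fractional_anisotropy(1.7, 0.2, 0.2))
```

Real DTI software estimates the tensor from many diffusion-weighted images before this step; the eigenvalue inputs here are illustrative.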

The study involved 38 participants, after excluding those who did not meet specific criteria or failed to complete the task correctly. These participants engaged in a standard test used to measure creative potential known as the Compound Remote Associates (CRA) task. In this activity, individuals viewed three words, such as “crab,” “pine,” and “sauce,” and were asked to find a fourth word that forms a common compound or phrase with all three; in this case, the answer is “apple” (crab apple, pineapple, applesauce).

After each successful solution, participants reported whether they arrived at the answer through a step-by-step analysis or a sudden insight. This self-reporting method allowed the scientists to quantify an “insight propensity” for each person. The researchers then analyzed the DTI scans to see how white matter integrity correlated with this propensity, controlling for variables such as age and gender.

The findings offered a counterintuitive perspective on brain connectivity. The analysis revealed that participants who solved more problems via insight exhibited lower fractional anisotropy in the left hemisphere’s dorsal language network. This network includes the arcuate fasciculus and the superior longitudinal fasciculus, pathways that connect brain regions responsible for language production, comprehension, and semantic processing.

“One striking finding was that people who more frequently experienced insight showed lower fractional anisotropy in specific left-hemisphere dorsal language pathways, including parts of the arcuate fasciculus and superior longitudinal fasciculus,” Salvi and Luchini told PsyPost.

“At first glance, that might sound counterintuitive. Fractional anisotropy is often interpreted as reflecting the coherence or organization of white matter pathways. In many cognitive domains, higher fractional anisotropy is associated with better performance.”

“But insight may operate differently. The left hemisphere is typically involved in focused, fine-grained semantic processing — narrowing in on dominant interpretations of words and concepts. The right hemisphere, by contrast, is thought to support broader, ‘coarse’ semantic coding — integrating more distantly related ideas. Slightly lower fractional anisotropy in left dorsal language pathways may reflect a system that is less tightly constrained by dominant interpretations.

“In other words, it may allow a partial ‘release’ from habitual patterns of thought and it is in line with other studies where lesions in the left frontotemporal regions have been shown to increase artistic creativity,” Salvi and Luchini continued. “Taken together, these findings imply that left hemispheric regions play a regulatory role in creativity and that their disruption lifts this constraint, thus promoting novel ideas.”

“That release effect is fascinating. In simple words, it suggests that creativity sometimes emerges not from strengthening control, but from relaxing it just enough to let weaker, more remote associations surface. When the brain is less locked into its most obvious interpretations, it may be more capable of restructuring the problem — and that restructuring is the heart of an Aha! moment.”

It is worth noting that no significant structural associations were found for the step-by-step analytical problem solving style. This suggests that the neural architecture supporting insight is distinct and specific. Analytical solving may rely on dynamic brain activity rather than the stable structural traits identified for insight.

This concept of sudden recognition is being explored in other sensory domains as well. A separate study recently conducted by researchers at NYU Langone Health examined “one-shot learning,” which is the visual equivalent of an “Aha!” moment.

In that study, participants viewed blurred images that became recognizable only after seeing a clear version. The NYU team found that the high-level visual cortex stores “priors,” or memory templates, which the brain accesses to suddenly make sense of ambiguous visual information.

While the NYU study focused on visual perception and the current study focused on linguistic creativity, both highlight a similar cognitive phenomenon: the brain’s ability to reorganize information suddenly to form a coherent whole. The NYU findings suggest this happens through accessing stored memory templates, while the current study suggests that linguistic insight relies on structural flexibility that permits distant connections to surface.

There are some limitations to the current study that warrant mention. The sample size of 38 participants is relatively small, though it is typical for technically intensive DTI studies. Additionally, the study establishes a correlation but does not prove causation. It remains unclear whether people are born with this structural connectivity or if engaging in creative thinking alters the white matter over time. Demographic factors such as education level were also noted as potential influences on white matter integrity.

Future research will likely focus on larger and more diverse groups to verify these results. Scientists may also attempt to combine structural imaging with functional tracking to see how these white matter highways are utilized in real-time during the moment of insight. By understanding the physical architecture of creativity, science moves closer to demystifying how the human brain generates novel ideas.

“In many areas of cognition, greater microstructural organization (as indexed by higher fractional anisotropy) is associated with stronger performance. Here, greater insight propensity was linked to lower fractional anisotropy in specific left dorsal pathways,” the researchers added.

“This challenges a simple ‘more organized white matter equals better cognition’ view. Instead, it suggests that the neural architecture supporting insight may involve a delicate balance between constraint and flexibility. Too much structural rigidity could reinforce dominant interpretations. A slightly less constrained system may allow the mind to wander just far enough to discover something unexpected. That idea — that brilliance can emerge from loosening control rather than tightening it — is both scientifically intriguing and deeply human.”

The study, “The white matter of Aha! moments,” was authored by Carola Salvi, Simone A. Luchini, Franco Pestilli, Sandra Hanekamp, Todd Parrish, Mark Beeman, and Jordan Grafman.

Video games may offer small attention benefits for children with ADHD

19 February 2026 at 01:00

A new analysis of digital health interventions suggests that specially designed video games may offer a small benefit in improving attention symptoms for children with certain neurodevelopmental conditions. While the findings indicate a positive outcome in a research setting, the improvements were not large enough to be considered a standalone treatment. These results were recently published in the journal Psychiatry Research.

Attention-Deficit/Hyperactivity Disorder, or ADHD, is a widespread condition that often manifests in children as difficulty sustaining focus or regulating impulses. This inattention is thought to stem from underlying differences in brain function related to neurotransmitter systems.

Standard treatments usually involve stimulant or non-stimulant medications, which can be highly effective for many children in managing core symptoms. However, these pharmaceutical options sometimes carry unwanted side effects, such as sleep difficulties or reduced appetite, prompting families and clinicians to search for additional approaches.

Over the past decade, various researchers have proposed digital interventions as a potential avenue for therapy. The underlying theory posits that certain video games designed to engage specific cognitive networks might stimulate brain activity in areas associated with attention.

Pengwei Ma, affiliated with Southwest University in China, aimed to evaluate the collective quality and consistency of the evidence regarding these digital therapeutics. Ma and the research team recognized that while individual experiments existed, their results were sometimes inconsistent or limited by small participant numbers.

To address this uncertainty with greater statistical power, the investigators conducted a systematic review and meta-analysis. This approach essentially functions as a “study of studies.” Instead of running a new clinical trial with patients, the team comprehensively searched major scientific databases to locate existing, high-quality research papers. By pooling data from multiple smaller projects, researchers can sometimes detect subtle effects that might be missed in an individual trial with fewer participants.

The researchers specifically looked for randomized controlled trials, which are generally considered the gold standard for evaluating medical interventions. The analysis was narrowed to include only studies focusing on children aged twelve and younger who had received a formal clinical diagnosis of ADHD. The search ultimately identified ten trials that met strict inclusion criteria, encompassing data from a total of 820 participants across different countries.

By combining the numerical outcomes from these ten separate trials, the investigators calculated an overall statistical measure known as an “effect size.” This number indicates the magnitude of the difference between groups that used the video game interventions and control groups that did not.
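As an illustration of the arithmetic involved (not the paper's actual model, which may use random-effects methods and different estimators), a standardized mean difference such as Cohen's d can be computed per trial and then pooled with inverse-variance weights:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control groups."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted average of per-trial effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical trial: higher attention score is better
d = cohens_d(mean_t=10.0, mean_c=8.0, sd_t=4.0, sd_c=4.0, n_t=50, n_c=50)
print(d)  # 0.5 -- a "medium" effect by conventional benchmarks

# Pooling two hypothetical trials of equal precision
print(fixed_effect_pool([0.2, 0.4], [0.01, 0.01]))  # ~0.3
```

By convention, d around 0.2 is called small, 0.5 medium, and 0.8 large, which is the kind of benchmark the "small effect" conclusion refers to.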

The combined analysis revealed that children who used the targeted video games experienced a measurable improvement in attention deficits compared to their peers. Statistical tests confirmed that this positive result was likely a genuine effect of the intervention rather than a chance occurrence.

It is important to put the magnitude of this improvement in context. While the effect was statistically detectable, the researchers characterized the benefit as not strong enough to be clinically meaningful on its own. To put this in perspective, medical researchers use specific numerical ranges to define how well a treatment works in a practical sense. Standard stimulant interventions for ADHD typically show a moderate to strong effect size in similar analyses.

The pooled effect size for the video game interventions fell into a range that scientists classify as small. This distinction is vital for parents and clinicians to understand when considering treatment options. A measurable change in a controlled research setting does not always translate to a major transformation in a child’s daily life skills or academic performance.

The analysis suggests that while digital interventions have a verifiable positive impact, they are not currently powerful enough to replace existing first-line treatments like medication or behavioral therapy.

The authors noted several aspects of the available data that require cautious interpretation. The review was limited to studies published in English and Chinese, potentially missing relevant research conducted in other languages. Furthermore, some of the included trials did not fully report methodological details, such as precisely how they ensured researchers remained unaware of which children were assigned to the treatment or control groups.

The review also highlighted variables that might influence how well these digital therapies work in future applications. There were indications that interventions lasting eight weeks or longer might be more effective than programs with shorter durations. Additionally, the researchers observed that video games incorporating physical exercise seemed to yield better results than sedentary cognitive games. Ma and colleagues suggested that future inquiries should investigate combining video game therapy with physical activity to potentially enhance therapeutic outcomes.

The ultimate conclusion drawn by the paper is one of cautious optimism. The findings support the idea that video games “may be therapeutic when added to other evidence-based therapies.” They appear best suited as a complementary tool within a broader treatment plan rather than a solitary solution for attention deficits in children.

The study, “Effects of video game intervention on attention deficit in children with ADHD: A systematic review and meta-analysis,” was authored by Pengwei Ma, Zhuolin Xue, Kun Yuan, Peiyun Zheng, Junfeng Li, and Jindong Chang.

Rising number of Americans report owning firearms for protection at public political events

18 February 2026 at 23:00

New research published in the journal Injury Epidemiology highlights a shift in the motivations behind gun ownership in the United States. Following the 2024 presidential election, fewer gun owners reported possessing firearms to advance political objectives. However, a growing number of owners, particularly Republicans, cited the need for protection at political rallies and protests as a primary reason for owning a gun.

The landscape of gun ownership in America has evolved substantially over the past few decades. “Over time, we’ve been noticing shifts in Americans’ reasons for owning guns,” said study author Julie A. Ward, an assistant professor at Vanderbilt University.

“It used to be that if you asked gun owners why they own a gun or guns, the main reason you would hear was ‘for hunting’. Over time, hunting has stayed an important reason for many gun owners, but we’ve also seen growth in other reasons. Now, for example, ‘protection from other people’ or for potential use in political or ideological conflict is increasingly common.”

“In a nationally representative survey we fielded in 2023, we saw that 85% of newer gun owners (meaning, people who purchased their first gun since 2020) said that at least one political violence related reason for gun ownership was personally important to them.”

“Roughly 60% of these newer gun owners cited defensive reasons (meaning, to protect themselves from political violence) and a similar portion cited assertive reasons (meaning, to advance an important political objective of their own). These proportions were nearly double what we saw among longer-term gun owners that year.”

“Knowing this – we wondered what changes we might see in Americans’ reasons for gun ownership two years later – following these hints of potential growth in owning guns for use in political conflict and on the heels of a 2024 US Presidential election that involved very high levels of political aggression and violence,” Ward explained.

“These are very real-world questions that we were trying to answer: What reasons do US gun owners give for their personal gun ownership? And, have those reasons changed since 2023 – either overall or by political affiliation? Understanding gun owners’ interests and concerns in this time of escalating tensions is critical for figuring out what we need to do to keep people safe.”

To investigate this, the researchers analyzed data from the National Survey of Gun Policy. This is a recurring survey that tracks public opinion on firearms and related policies. The study utilized two specific waves of the survey. The first wave was collected in January and February of 2023. The second wave was collected in January 2025, shortly before the presidential inauguration.

The total sample consisted of 2,003 adults who personally owned firearms. The participants were split evenly between the 2023 and 2025 groups. To ensure the findings applied to the general public, the researchers used statistical weighting. This is a method that adjusts the survey data so that the demographics of the respondents match the age, race, and gender makeup of the entire country.
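The idea behind this kind of weighting can be sketched in a few lines. This is a minimal post-stratification example with hypothetical groups and numbers, not the survey's actual weighting scheme (which adjusts for several demographics at once):

```python
def poststratification_weights(sample_counts, population_shares):
    """Weight each group so the weighted sample matches population shares."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

def weighted_proportion(responses, weights):
    """Weighted share of 'yes' answers; responses are (group, bool) pairs."""
    total = sum(weights[g] for g, _ in responses)
    yes = sum(weights[g] for g, ans in responses if ans)
    return yes / total

# Hypothetical: men are overrepresented in the raw sample (70% vs 50%)
w = poststratification_weights({"men": 700, "women": 300},
                               {"men": 0.5, "women": 0.5})
print(w)  # men weighted down (~0.71), women weighted up (~1.67)
```

Overrepresented groups receive weights below 1 and underrepresented groups weights above 1, so weighted estimates reflect the population rather than the raw sample.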

Participants in the study were presented with a list of ten potential reasons for owning a gun. They were asked to rate how important each reason was to them personally. The options covered a wide range of motivations. These included traditional reasons like hunting or recreational target shooting.

The list also included specific questions regarding political violence. For example, participants were asked if they owned a gun “for protection at demonstrations, rallies, or protests.” Another option asked if they owned a gun “to advance an important political objective.” This phrasing implies using the firearm as a tool to force a political outcome rather than just for safety.

The researchers found a notable decline in the number of people owning guns for assertive political purposes. In 2023, roughly 35 percent of gun owners said that advancing a political objective was an important reason for ownership. By 2025, that number had dropped significantly to 22 percent.

“We found that as political violence escalated nationally, large majorities of Democrat, Independent, and Republican gun owners were rejecting such violence,” Ward told PsyPost. “Compared with responses in 2023, in 2025, we saw significantly fewer gun owners endorsing gun ownership to ‘advance an important political objective’ across each of these political groups.”

In contrast, the researchers observed a rise in gun ownership motivated by a desire for protection in political spaces. In 2025, 42 percent of all gun owners said protection at demonstrations or rallies was an important reason for ownership. This was an increase from 35 percent in 2023.

This shift was largely driven by Republican gun owners. The data showed that 51 percent of Republican respondents in 2025 cited protection at rallies as a key reason for owning a gun. This was a significant jump from 40 percent in 2023.

“This tells us there is growing concern among gun owners for personal safety in spaces that are used for political speech,” Ward said. “The problem is, even when motivated by defensive interests, increased gun carrying doesn’t reduce a population’s risk for gun-related harms – it increases it.”

“It is especially urgent that policymakers act on these safety concerns. For example, policies that regulate gun carrying in sensitive spaces are a strategy that can protect First and Second Amendment rights at the same time. For public safety and for democracy, it is critical that people not only feel safe – but actually also are safe – when they are exercising their right to free speech.”

Republicans also reported increases in other protective motivations. In 2025, 97 percent of Republican gun owners cited home protection as an important reason, up from 93 percent. Additionally, concern regarding police violence increased within this group. About 34 percent of Republicans cited protection against police as a reason for ownership in 2025, compared to 25 percent in 2023.

The researchers also found a resurgence in hunting as a motivation. Among all gun owners, 81 percent listed hunting as important in 2025. This was an increase from 74 percent in the previous survey.

“We think there may be some interesting explanations for why we see these differences happening together,” Ward told PsyPost. “One may relate to marketing – linking consumerism (related to buying more or different guns) to threat messaging and to personal identities can be a powerful way to increase gun sales. And, those types of messages don’t come at us randomly. Social media and newsfeed algorithms can use political identity to shape the messages we see. How those exposures shape how we feel about our own gun ownership could be an important direction for future research.”

There are some limitations to this study that should be considered. The research compared two different groups of people at two different times. It did not track the same individuals over the two-year period. This means the study describes changes in the overall population, but it cannot pinpoint if specific individuals changed their minds.

“The results we report are both statistically significant and practically significant,” Ward noted. “Many of these differences were double-digit percentage point shifts. It’s also important to note that the way this large survey was designed means these results are representative of the population of gun owners nationwide.”

There is also the potential for social desirability bias. This is a phenomenon where survey takers give answers they believe are socially acceptable, rather than what they truly feel. However, the anonymous nature of the survey helps to reduce this likelihood.

Future research could examine the underlying causes of these shifts. Scientists suggest investigating how media consumption and political marketing influence fears of victimization. Understanding why specific groups feel unsafe at political events could help policymakers design better security measures.

The study, “Gun ownership for political protection or armed political expression: a nationally representative analysis of differences in 2025 vs. 2023,” was authored by Julie A. Ward, Rebecca A. Valek, Vanya C. Jones, Lilliana Mason, and Cassandra K. Crifasi.

High IQ men tend to be less conservative than their average peers, study finds

18 February 2026 at 21:00

The stereotype of the eccentric genius with radical political views is a common trope in fiction. A new study challenges this assumption by suggesting that highly intelligent adults may hold political views that are remarkably similar to the general population. Researchers found that adults identified as gifted in childhood largely share the same political outlooks as their non-gifted peers, with one specific exception regarding conservatism in men. These findings were published in the scientific journal Intelligence.

Society often looks to gifted individuals to solve major problems. These individuals frequently occupy leadership roles in economics, science, and politics. Because they hold positions of influence, understanding how they view the world is a matter of public interest.

Researchers have spent decades trying to understand the link between cognitive ability and political belief. Some past theories suggested that higher intelligence leads to left-wing or liberal views. Other theories proposed that intelligent people might favor economic conservatism.

The results of these past studies have been inconsistent. This inconsistency led a team of researchers to investigate the matter using a long-term approach. They wanted to see if distinct political patterns emerge when comparing gifted adults to a control group of average intelligence.

The lead author of the study is Maximilian Krolo from the Department of Educational Science at Saarland University in Germany. He collaborated with Jörn R. Sparfeldt, also from Saarland University, and Detlef H. Rost from the Department of Psychology at Philipps-University Marburg.

The team based their research on the “Cognitive Complexity-Openness Hypothesis.” This concept suggests that people with higher intelligence are generally more open to new experiences. They are also thought to be better equipped to handle complex or nuanced ideas.

If this hypothesis holds true, gifted individuals might reject rigid political dogmas. They might gravitate toward more flexible or moderate positions. The researchers aimed to test if this theoretical flexibility translates into specific political preferences in adulthood.

To do this, the authors utilized data from the Marburg Giftedness Project. This is a longitudinal study based in Germany that tracks the development of individuals over time. The project began during the 1987-1988 school year.

The initial phase involved examining over 7,000 third-grade students. The researchers administered standardized intelligence tests to this large group. These tests measured reasoning abilities and the speed at which the students processed information.

From this large pool, the team identified a group of gifted students. These students had an Intelligence Quotient (IQ) of 130 or higher. In the general population, an IQ of 100 is considered average.

The researchers then selected a control group of non-gifted students. This group had IQ scores near 100. The researchers ensured this control group matched the gifted group in other ways, such as gender ratios and socioeconomic background.

This matching process was designed to ensure fair comparisons. It allows researchers to be more confident that any differences found later are actually due to intelligence differences.

Six years later, when the students were in the ninth grade, the team tested them again. This re-evaluation confirmed the cognitive status of the participants. It ensured that the classification of “gifted” or “non-gifted” remained accurate as the children entered adolescence.

The current study focuses on these same individuals roughly 35 years after they were first identified. The participants were now adults with an average age of about 43. The researchers sent them surveys to assess their political orientations.

A total of 87 gifted adults and 71 non-gifted adults completed the survey. The response rate was notably high for a study spanning so many decades. This level of participation helps strengthen the reliability of the data.

The survey measured political views in two different ways. The first method was a simple single-dimensional scale. Participants were asked to place themselves on a spectrum ranging from left (1) to right (10).

The second method was more detailed. The researchers used the “Political Ideologies Questionnaire” to measure four distinct dimensions of political thought. These dimensions allowed for a more precise understanding of specific beliefs.

The first dimension was economic libertarianism. This viewpoint emphasizes free markets and individual liberty in economic matters. People who score high here often view merit-based inequality as fair.

The second dimension was conservatism. This outlook values tradition and social stability. High scorers usually believe that shared culture and established rules are necessary to prevent societal fragmentation.

The third dimension was socialism. This perspective focuses on equality of outcome. It emphasizes protecting disadvantaged groups and may advocate for social changes to reduce exploitation.

The fourth dimension was liberalism. In this context, liberalism refers to placing a high value on individual autonomy. It suggests that people should be free to live as they please provided they do not harm others.

The researchers analyzed the survey data using a statistical method called analysis of variance (ANOVA). They checked for differences between the gifted and non-gifted groups. They also looked for differences based on sex.

On the simple left-right scale, the results showed no statistical difference between the two groups. Both the gifted and non-gifted adults tended to place themselves near the center of the spectrum. This suggests a general tendency toward moderation in both groups.

The researchers then analyzed the four specific dimensions of the detailed questionnaire. For economic libertarianism, socialism, and liberalism, the analysis again showed no statistical difference between the groups. Giftedness did not appear to push individuals toward or away from these specific ideologies.

However, a distinct pattern emerged regarding the dimension of conservatism. The researchers found an interaction effect between giftedness and sex. This means the relationship between intelligence and conservatism depended on whether the participant was male or female.

Specifically, non-gifted men scored higher on conservatism than gifted men. The non-gifted men were more likely to endorse values related to tradition and strict social order. Gifted men were less likely to hold these traditional conservative views.

This difference was not observed among the women in the study. Gifted women and non-gifted women showed similar levels of conservatism. The divergence was unique to the male participants.
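An interaction of this kind amounts to a "difference of differences" across the four group means. The sketch below uses hypothetical conservatism scores, chosen only to show the pattern the study reports (a giftedness gap among men, none among women):

```python
def interaction_contrast(cells):
    """Difference-of-differences: how much larger the giftedness gap
    in conservatism is for men than for women."""
    male_gap = cells[("non_gifted", "male")] - cells[("gifted", "male")]
    female_gap = cells[("non_gifted", "female")] - cells[("gifted", "female")]
    return male_gap - female_gap

# Hypothetical mean conservatism scores on a 1-5 scale
cells = {
    ("non_gifted", "male"): 3.4,
    ("gifted", "male"): 2.8,        # gifted men score lower
    ("non_gifted", "female"): 3.0,
    ("gifted", "female"): 3.0,      # no gap among women
}
print(interaction_contrast(cells))  # ~0.6: a nonzero contrast signals interaction
```

If the giftedness gap were the same for both sexes, this contrast would be zero and there would be no interaction, only (at most) main effects.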

The researchers used supplementary Bayesian analyses to verify these results. Bayesian analysis is a statistical technique that weighs the strength of evidence for different models. These additional tests supported the initial findings.

The team interpreted the findings through the lens of cognitive flexibility. They suggest that non-gifted men might rely more on traditional perspectives when processing complex social issues. This reliance could lead to higher conservatism scores.

On the other hand, gifted men may possess greater cognitive flexibility. This allows them to process diverse perspectives more easily. Consequently, they may be less inclined to adhere to rigid traditional norms.

The lack of difference in the other categories supports the “centering” hypothesis. This is the idea that intelligent individuals often avoid extreme political positions. They may see extreme views as oversimplifications of a complex reality.

The authors also noted that the German political context might play a role. Germany has a “social market economy” that blends capitalism with social welfare. This cultural environment might encourage a consensus around moderate views for everyone, regardless of intelligence.

As with all research, there are limitations to the study that must be considered. The sample size was relatively small, which is common in studies that last for decades. A larger sample might have detected smaller effects that this study missed.

Additionally, the study was conducted exclusively in Germany. Political terms like “liberal” or “conservative” can have different meanings in different countries. The results might not apply perfectly to the political landscape of the United States or other nations.

The study also relied on self-reported beliefs. While honest reporting is assumed, people sometimes describe themselves differently than their actions might suggest.

Future research could address these limitations by looking at actual behavior. For instance, scientists could examine voting records or party memberships. This would help determine if these internal orientations translate into real-world political action.

Despite the limitations, the study offers a clear message. High intelligence does not automatically lead to radical or distinct political views. Gifted adults appear to be as politically diverse and moderate as the rest of the population.

The one notable exception regarding male conservatism warrants further investigation. It highlights how intelligence and gender might interact to shape how people value tradition.

Ultimately, this research suggests that while gifted individuals may process information differently, their political conclusions are not fundamentally alien. They navigate the same societal debates as everyone else. Their minds may be exceptional, but their politics are often quite ordinary.

The study, “Exploring exceptional minds: Political orientations of gifted adults,” was authored by Maximilian Krolo, Jörn R. Sparfeldt, and Detlef H. Rost.

Study finds a disconnect between brain activity and feelings in lonely people

18 February 2026 at 19:00

Loneliness acts as more than a fleeting emotional state; it functions as a persistent filter that alters how the human brain processes the social world. New research published in the journal Biological Psychology provides evidence that this condition changes the neural mechanisms responsible for evaluating threats and regulating emotions.

The study demonstrates that applying a mild, targeted electrical current to the frontal lobe can help lonely individuals perceive negative social scenes as less distressing. These findings offer a new perspective on the disconnect between how lonely people react to their environment physiologically and how they consciously perceive those reactions.

Social isolation is widely recognized as a risk factor for a variety of physical and mental health issues. These range from increased susceptibility to cardiovascular disease to a higher likelihood of developing neurodegenerative disorders. Psychologists have long sought to understand the cognitive machinery that drives these negative outcomes. One prominent framework is the Evolutionary Theory of Loneliness. This theory suggests that isolation triggers a state of hypervigilance. The lonely brain becomes obsessively tuned to social signals in an effort to reconnect with others.

This constant scanning for social cues can lead to a depletion of cognitive resources. When the brain is busy monitoring for threats, it may have less capacity remaining to manage or regulate emotional responses. Szymon Mąka and his colleagues at the Institute of Psychology within the Polish Academy of Sciences designed a study to test these theoretical mechanisms. Mąka and senior author Łukasz Okruszek had previously noted a paradox in their research. They observed that lonely individuals often display strong physiological reactions to negative social cues. Despite this bodily response, these same individuals frequently report feeling lower levels of emotional arousal compared to non-lonely people.

This discrepancy suggests that loneliness might not simply break the brain’s ability to regulate emotion. Instead, it may disrupt the self-monitoring processes that allow a person to accurately interpret their own internal state. To investigate this, the researchers focused on the dorsolateral prefrontal cortex. This region of the brain sits just behind the forehead and acts as a control center for executive functions. It plays a primary role in top-down processing, which is the ability of higher-level thoughts to regulate lower-level emotional impulses.

The research team recruited 120 participants for the experiment. They stratified these volunteers into two distinct groups based on their scores on the Revised UCLA Loneliness Scale. One group consisted of sixty individuals who reported high levels of loneliness. The other group consisted of sixty individuals who reported low levels of loneliness. The researchers aimed to see if manipulating the activity of the dorsolateral prefrontal cortex could alter how these groups processed negative imagery.

To manipulate brain activity, the researchers employed a technique known as transcranial direct current stimulation. This non-invasive method involves placing electrodes on the scalp to deliver a weak electrical current to specific brain areas. The current can temporarily increase or decrease the excitability of the neurons underneath. In this study, participants attended two separate sessions. In one session, they received active anodal stimulation, which generally enhances neuronal activity, applied to either the left or right side of the prefrontal cortex. In the other session, they received a sham stimulation.

The sham condition served as a control. The device would ramp up to mimic the physical sensation of the stimulation starting but would then turn off. This ensured that the participants could not distinguish between the active and control sessions. This double-blind design prevented the participants’ expectations from influencing the results. While receiving the stimulation, participants sat before a computer screen while wearing a cap equipped with sensors to record electroencephalography, or EEG, data.

The researchers presented the participants with a series of images. Some of these pictures depicted negative social content, such as scenes of violence or accidents. Others depicted negative non-social content, such as spiders or snakes. Neutral images were also included as a baseline. For each image, the participants received one of two instructions. They were told either to simply “watch” the image passively or to “reappraise” it. Cognitive reappraisal is a strategy where a person mentally reframes a situation to reduce its emotional impact. For example, a participant might view a bloody scene and remind themselves that it is a fake scene from a movie.

After viewing each image, participants rated how negative they felt and how intense their emotional arousal was. Simultaneously, the EEG sensors recorded event-related potentials. These are specific changes in the brain’s electrical activity that occur in response to a stimulus. The researchers were particularly interested in the Late Positive Potential. This is a brain wave pattern that typically reflects the amount of attention and cognitive resources the brain is dedicating to an emotional stimulus.

The analysis revealed a specific effect regarding how stimulation influenced the lonely group. When highly lonely participants received active stimulation to the left dorsolateral prefrontal cortex, they rated negative social images as less unpleasant compared to the sham condition. This change in perceived valence occurred during the passive watching condition. This suggests that boosting activity in the left frontal lobe helped lonely individuals dampen their immediate, automatic negative evaluation of social threats.

The physiological data provided a layer of complexity to these behavioral findings. Despite the lonely participants reporting that they felt less negativity, their brain wave patterns did not show a corresponding drop in activity. The electrical markers of emotional processing remained similar between the active and sham conditions for this group. This finding aligns with the researchers’ earlier hypothesis regarding a disconnect in self-awareness. It appears that loneliness may impair the ability to map internal physiological responses onto conscious feelings. The stimulation altered the subjective report without necessarily changing the underlying neural magnitude of the threat response.

The study also yielded results regarding the general mechanism of cognitive reappraisal across all participants. When the researchers analyzed the data for the entire sample, they found that active stimulation enhanced the neural modulation associated with reappraisal. Specifically, there was a larger difference in the Late Positive Potential between the reappraisal condition and the passive watching condition during active stimulation. This effect was specific to social stimuli.

This indicates that the stimulation successfully helped the brain engage the neural circuits required to regulate emotions. However, a divergence appeared here as well. While the brain data showed enhanced regulation, the participants rated the images as more negative during the reappraisal trials under active stimulation than they did under sham stimulation. This implies that while the brain was working harder to reframe the images, the participants subjectively felt that their attempts at regulation were less effective.

The authors interpret these findings as evidence that the left and right sides of the prefrontal cortex may have distinct roles. Previous studies have often linked the right side to deliberate cognitive control and the left side to more automatic emotional processing. The current results support the idea that the left dorsolateral prefrontal cortex helps modulate spontaneous affective evaluations. For lonely individuals, whose automatic processing of social threats may be biased, stimulation of this region provided a specific benefit in reducing subjective distress.

There are limitations to the study that warrant consideration. The use of electrical stimulation during EEG recording can introduce noise into the data, which requires extensive processing to remove. This can sometimes affect the clarity of the brain signals. The experimental task was also relatively brief to fit within the time window where the electrical stimulation is most effective. In daily life, regulating emotions in response to social isolation is a prolonged process that may not be fully captured by looking at a picture for a few seconds.

Additionally, the study relied on young adult participants. It is not yet clear if these findings would apply to older adults, who are often the focus of loneliness research. The researchers also note that they did not include a direct measure of metacognition, or thinking about thinking. Future studies would benefit from asking participants to explicitly evaluate how well they think they are tracking their own emotions.

Despite these caveats, the research highlights that loneliness is not merely a problem of feeling too much or regulating too little. It involves a complex mismatch between the brain’s automatic reactions and the individual’s conscious experience of the social world. By showing that targeted brain stimulation can shift these subjective evaluations, the study opens potential avenues for understanding how neural interventions might one day support therapies for social isolation.

The study, “Targeted neuromodulation of the left dorsolateral prefrontal cortex alleviates altered affective response evaluation in lonely individuals,” was authored by Szymon Mąka, Marta Chrustowicz, and Łukasz Okruszek.

The biological roots of the seven deadly sins might start in the womb

18 February 2026 at 18:00

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

On Monday, February 9, the Huberman Lab podcast, hosted by Andrew Huberman, featured Dr. Kathryn Paige Harden in an episode exploring the biological roots of human behavior. Harden is a professor of psychology at the University of Texas at Austin, known for her research on how genetic factors influence social outcomes. The episode focused on how DNA interacts with early brain development to shape complex traits like risk-taking, morality, and antisocial behavior.

At roughly the 20-minute mark, Harden discusses the “seven deadly sins” through the lens of modern science. She explains that behaviors often labeled as sinful, such as aggression, addiction, and promiscuity, share a common genetic foundation. These traits are not located in a single brain area but are influenced by many genes that affect how the brain develops before birth.

Harden notes that these genetic influences peak during the second and third trimesters of pregnancy. They appear to regulate the balance between the brain’s inhibitory system, involving a chemical called GABA, and its excitatory system, involving glutamate. She suggests that conditions like substance use disorders should be viewed as neurodevelopmental issues similar to ADHD, stemming from early differences in brain wiring.

The conversation highlights three personality dimensions that drive risky behavior. These include sensation-seeking, which is a drive for intense experiences, and disinhibition, which is a lack of self-control. The third dimension is antagonism or callousness, characterized by an indifference to the negative consequences one’s actions have on others.

When addressing the role of trauma, Harden describes the relationship between nature and nurture as a woven tapestry. She explains that parents pass down both genetic risks and environmental conditions to their children. This makes it difficult to separate the effects of inherited biology from the impact of a chaotic or traumatic upbringing.

The discussion shifts to the use of polygenic scores, which are tools used to estimate a person’s genetic likelihood for certain outcomes. Harden warns against “genetic essentialism,” or the belief that DNA defines a person’s true identity or destiny. She notes that receiving information about genetic risks can alter how people view themselves and their potential, sometimes leading to a sense of fatalism.

You can listen to the full interview here.

Ibogaine appears to trigger an accelerated “auto-psychotherapy” process during PTSD treatment

18 February 2026 at 17:00

A new study published in npj Mental Health Research suggests that U.S. Special Operations veterans treated with a combination of magnesium and ibogaine experience a rapid, self-directed form of psychological healing. The findings suggest that the treatment triggers a state of “auto-psychotherapy,” where patients revisit traumatic memories, reframe their life narratives, and feel a physical sense of brain repair.

Ibogaine is a powerful psychoactive substance derived from the root bark of the African shrub Tabernanthe iboga. While it has been used traditionally for centuries in West-Central Africa, it has gained attention in modern medicine for its potential to treat addiction and severe mental health conditions.

However, ibogaine interacts with the heart in ways that can be dangerous. To mitigate these risks, the treatment protocol in this study combined ibogaine with magnesium, a mineral that supports heart health and nervous system stability.

Previous observational studies have indicated that ibogaine can lead to reductions in symptoms of depression, anxiety, and substance use disorders. Despite these promising clinical outcomes, scientists have not fully understood what the patient actually experiences during the treatment that leads to recovery.

Most prior research focused on numbers and symptom checklists rather than the personal story of the patient. In their new study, the researchers aimed to bridge that gap. They sought to characterize the specific thoughts, emotions, and sensations veterans experienced to see if these subjective effects could explain their rapid recovery.

“A major motivation was the gap between strong clinical improvements being reported and limited understanding of patients’ lived healing processes,” said study author Clayton Olash, a psychiatry resident physician at the Medical University of South Carolina and affiliate researcher at Stanford University’s Brain Stimulation Laboratory.

“There is an active debate about whether psychedelic outcomes are primarily pharmacologic and neurobiological, or whether subjective experience and meaning-making are central to change. We wanted to characterize the lived psychological process in veterans who showed substantial recovery, and examine whether their reports aligned with established therapeutic frameworks.”

The study involved 30 male U.S. Special Operations Veterans. These participants had extensive histories of combat deployments and traumatic brain injuries, often caused by blast exposure. At the start of the study, the majority met the clinical criteria for post-traumatic stress disorder, and many suffered from major depressive disorder or alcohol use disorder. The participants traveled to a specialized clinic in Tijuana, Mexico, to receive the treatment.

The procedure was highly structured. Participants underwent medical screenings and preparatory sessions before the treatment day. On the day of dosing, they received intravenous magnesium to protect the heart. They then took oral ibogaine, with the dosage calculated specifically for their body weight. During the active phase of the drug, which can last many hours, the veterans lay on mats with eyeshades to encourage internal reflection. Medical staff monitored them closely throughout the process.

To capture the subjective nature of the experience, the researchers asked the veterans to answer three open-ended questions shortly after their treatment. The scientists analyzed these written narratives using a method called constructivist grounded theory. This is a research technique where scientists read the text multiple times to identify recurring patterns and group them into broader themes. This allows the data to tell a story rather than forcing it into pre-existing categories.

The analysis revealed that the veterans engaged in a process the scientists described as “accelerated auto-psychotherapy.” This term refers to a condensed, self-guided therapeutic process where the patient achieves deep insights without the immediate direction of a talk therapist. The researchers identified four primary domains of experience that defined this process.

The first domain was characterized as dialogic trauma re-appraisal. Veterans reported that they were able to recall painful or repressed memories with vivid clarity. However, unlike in a flashback where the person feels the original terror, these participants viewed the events with a sense of detachment. Many described an internal dialogue with a “guide” or “teacher” that helped them view their trauma from a new, less self-critical perspective. This allowed them to process events that had haunted them for decades.

“What surprised me most was the depth and consistency of reported psychological reprocessing across participants,” Olash told PsyPost.

The second theme involved an altered sense of self and mystical connectedness. Participants frequently described a dissolution of their ordinary ego or identity. Some reported feeling as though they were a “witness” to their own life, separating their core consciousness from their history and pain. This state often included feelings of awe and a sense of merging with a divine presence or the universe. This shift in perspective appeared to help veterans break free from rigid, negative beliefs about themselves.

The third theme centered on emotional resolution. The narratives contained frequent descriptions of intense emotional release. Veterans reported finding relief from chronic guilt, shame, and anger. In place of these heavy emotions, they experienced surges of forgiveness and compassion, both for themselves and for the people in their lives. This emotional breakthrough often led to a renewed desire to connect with family and friends.

The final theme was described as embodied healing. A significant number of participants reported physical sensations that they interpreted as their brains being repaired. They used metaphors involving electricity, rewiring, or scrubbing to describe the feeling of their neural pathways resetting. While this was a subjective sensation, it coincided with the veterans reporting improved mental clarity and a reduction in the physical symptoms associated with their brain injuries.

“I also found it remarkable how often participants linked subjective healing experiences to a felt sense of neural restoration, especially when previous work in this same cohort has shown that they experienced objective cognitive recovery from symptoms of TBI,” Olash said.

“In practical terms, these were not subtle changes in many cases, participants often described rapid and meaningful improvement in symptoms and functioning. Because this was qualitative work, our goal was depth and mechanism-rich description rather than effect size estimation. So the significance is best understood as clinically meaningful lived change that helps generate testable hypotheses for larger controlled studies.”

These themes suggest that ibogaine functions differently than standard psychiatric medications, which often work by suppressing symptoms. Instead, the substance appears to induce a dream-like state that lowers psychological defenses. This allows the brain to process suppressed information and reorganize its understanding of past trauma. The researchers noted that these experiences align with concepts found in established therapies, such as cognitive behavioral therapy and exposure therapy, but occur at a much faster rate.

“The key takeaway is that, in carefully structured settings, psychedelic treatment can involve deep psychological change and not just temporary intoxication,” Olash told PsyPost. “Participants described shifts in beliefs, emotional processing, trauma-related meaning, and sense of self that mapped onto concepts seen in psychotherapy. At the same time, these are powerful interventions with real risks, so this is not a casual or unsupervised treatment model.”

While the results provide insight into the potential of ibogaine, there are limitations to consider. The study population consisted exclusively of male Special Operations veterans, so the findings may not apply to women or civilians with different types of trauma. The data relied on retrospective accounts written days after the treatment, which can be subject to memory errors. Additionally, the study was open-label and observational, meaning there was no control group receiving a placebo for comparison.

It is also important to note that the researchers who analyzed the data were interpreting subjective narratives, which introduces a degree of potential bias. For instance, the study does not prove that the physical sensation of “brain rewiring” corresponds to actual biological repair.

“These qualitative findings are hypothesis-generating and should not be overgeneralized beyond the population and treatment conditions studied,” Olash said.

The scientists emphasize that ibogaine is a potent substance with serious risks if not administered in a medical setting. Future research aims to combine these narrative accounts with neuroimaging technology. This would allow scientists to see if the subjective feelings of healing map onto observable changes in brain structure and function. The researchers hope that by understanding these mechanisms, they can develop safer and more effective treatments for complex neuropsychiatric disorders.

“My long-term goal is to understand how acute altered states can be translated into durable clinical traits and recovery,” Olash explained. “That includes studying how psychedelic interventions might be paired with mindfulness, psychotherapy, and brain stimulation approaches to improve durability and personalization of outcomes. I am interested in mechanism-focused, clinically grounded research that can inform safer and more effective psychiatric care.”

“I think psychiatry is entering a period where novel interventions may meaningfully expand the treatment landscape for people who have not responded to conventional options. That said, careful screening, medical oversight, and rigorous science are essential. I hope this work encourages thoughtful public discussion and more high-quality research rather than hype.”

The study, “Accelerated recovery using magnesium ibogaine: characterizing the subjective experience of its rapid healing from neuropsychiatric disorders,” was authored by Clayton Olash, Derrick Matthew Buchanan, Randi Brown, Afik Faerman, Kirsten Cherian, George Lin, David Spiegel, James J. Gross and Nolan Williams.

Stanford researcher explains how beliefs alter physical reality

18 February 2026 at 16:00

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

On Thursday, November 20, The Psychology Podcast, hosted by Dr. Scott Barry Kaufman, featured Dr. Alia Crum in an episode exploring the science behind the mind-body connection. Dr. Crum is a principal investigator at the Stanford Mind & Body Lab who studies how subjective mindsets can alter objective physiological realities. The episode focused on her groundbreaking experiments regarding the placebo effect, exercise, and the biological impact of our beliefs about food.

At roughly the 8-minute mark, Dr. Crum describes an experiment she conducted with hotel housekeepers to test the placebo effect outside of a clinical setting. She discovered that while these women were physically active during their shifts, they did not believe they exercised enough to be healthy. Once researchers informed the workers that their job met the Surgeon General’s fitness guidelines, the women experienced measurable drops in weight and blood pressure despite not changing their daily behaviors.

The conversation shifts to diet and metabolism around the 10-minute mark, specifically focusing on the hormone ghrelin. This biological chemical signals hunger to the brain and slows down metabolism when the body thinks it needs food. Dr. Crum explained that typically, ghrelin levels drop after a person eats a large meal to signal that the body is full.

To test if the mind could influence this biological process, researchers gave participants identical milkshakes containing roughly 380 calories. One group was told the drink was an indulgent, high-calorie treat, while the other group believed they were drinking a low-calorie, sensible diet shake. The team then measured the ghrelin levels in the participants’ bloodstreams to see how their metabolisms reacted.

The results revealed that those who believed they consumed the high-calorie shake experienced a drop in ghrelin three times greater than the group that believed they drank the diet shake. This indicates that the body’s physical satiety response is driven partly by the mental expectation of how much food was consumed. Dr. Crum noted that thinking a meal is “sensible” or “restricted” might actually keep hunger hormones high and metabolism slow, counteracting the goals of a diet.

You can listen to the full interview here.

Psychologists developed a 20-minute tool to help people reframe their depression as a source of strength

18 February 2026 at 15:00

New research published in the Personality and Social Psychology Bulletin provides evidence that changing how people view their past struggles with depression can improve their ability to achieve life goals. The study suggests that reframing depression as a sign of strength, rather than weakness, boosts self-confidence and tangible goal progress. This psychological shift helped participants make nearly 50 percent more progress on their personal objectives over a two-week period compared to those who did not receive the intervention.

Depression is a widespread mental health condition that often hinders a person’s ability to pursue their ambitions, such as career advancement, hobbies, or maintaining relationships. While the biological symptoms of the illness, such as fatigue and lack of motivation, certainly play a major role, the researchers suspected another social factor was at play.

Society often stigmatizes depression, promoting a narrative that paints those who suffer from it as inherently weak or damaged. The scientists hypothesized that this societal label of “weakness” becomes internalized by individuals. This acts as a mental barrier even when they are not currently experiencing severe symptoms.

“Before our study, it was not clear why people who have experienced depression can experience goal pursuit problems even after their depressive symptoms have faded,” said study author Christina A. Bauer of the University of Vienna.

The researchers reasoned that if people believe they are fundamentally flawed because of their depression, they may lack the confidence to strive for their goals. This creates a self-fulfilling prophecy where the fear of weakness leads to actual struggles in achievement. The team wanted to test if flipping this narrative could restore confidence. They aimed to show that the struggle against depression is actually proof of resilience, perseverance, and emotional intelligence.

To test this theory, the researchers conducted three separate experiments involving a total of 748 participants. All participants were adults who had previously been prescribed anti-depressants, indicating they had experienced depression at a clinical level.

In the first experiment, the researchers recruited 158 participants and randomly assigned them to one of two groups. The control group engaged in an active control task where they read factual information about depression from the American Psychiatric Association and reflected on their own experiences. This ensured that any differences found were not simply due to thinking about depression. The second group participated in the “depression-reframing” intervention.

The depression-reframing exercise was a brief session lasting about twenty minutes. Participants read stories from others who described how dealing with depression required strength. These stories highlighted qualities like perseverance and the ability to manage difficult emotions. After reading these examples, participants were asked to write a reflection on how their own battle with depression demonstrated their personal strength. They were also asked to frame this as advice to help others, which is a technique often used to reinforce new ways of thinking.

The results of the first experiment showed that those who completed the reframing exercise reported higher levels of general self-efficacy. In psychology, self-efficacy is a person’s belief in their ability to succeed in specific situations or accomplish a task. Essentially, the participants who viewed their depression as a source of strength felt more capable of handling life’s challenges compared to those in the control group.

The second experiment involved a larger group of 419 participants. This study aimed to replicate the initial findings and see if the boosted confidence applied to specific, real-world goals. After completing the same reframing or control exercises as in the first study, participants identified a specific personal goal they wanted to achieve in the next two weeks. These goals varied widely, ranging from physical self-care, like exercising three times a week, to work-related tasks, like finishing an assignment.

The researchers found that the reframing exercise not only increased general confidence but also boosted commitment to these specific personal goals. To understand why this happened, the researchers analyzed the data further. They found that the exercise changed how participants viewed the compatibility between their illness and their success.

In the control group, 71 percent of participants felt that the strengths needed to achieve their goals did not describe people with depression. In the reframing group, this figure dropped significantly to 52 percent. By seeing depression as compatible with being strong, participants felt more empowered to pursue their objectives.

The final experiment was a longitudinal study, meaning the researchers tracked participants over time to measure actual progress. They recruited 171 individuals who had experienced depression. As in the previous rounds, participants completed either the reframing exercise or the control activity and set a specific goal. The researchers then contacted the participants two weeks later to assess their progress.

The difference between the two groups was substantial. Participants in the control group reported that they had completed about 43 percent of their goal after two weeks. In contrast, those who underwent the depression-reframing exercise reported reaching 64 percent completion. This represents a 49 percent increase in goal progress driven by the brief psychological intervention.
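The reported figures can be checked directly: going from 43 percent to 64 percent completion is a relative gain of about 49 percent. A minimal sketch (the 0.43 and 0.64 values come from the study; the variable names are ours):

```python
# Mean goal completion after two weeks, as reported in the study
control_progress = 0.43     # active control group
reframing_progress = 0.64   # depression-reframing group

# Relative increase of the reframing group over the control group
relative_increase = (reframing_progress - control_progress) / control_progress
print(f"{relative_increase:.0%}")  # prints 49%
```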

“The size of the reframing-intervention acknowledging strengths of people with depression was as big as moving people from a heavy to a moderate depression,” Bauer told PsyPost. “This highlights the severity of stigma effects of depression – in addition to the effects of depression as a mental disease.”

Additionally, the third experiment explored how participants might handle a relapse. Participants were asked to imagine they were experiencing severe depressive symptoms again. Those who had completed the reframing exercise indicated they would treat themselves with more compassion and respect in such a scenario compared to the control group. This suggests that the intervention might help build resilience against future episodes of illness.

“We show that in addition to the symptoms of depression (e.g., fatigue based on hormonal imbalances), stigmatizing narratives about depression – the idea that people who have experienced depression are weak people – can undermine successful goal pursuit, too,” Bauer explained. “Even when depression is over, people can, based on stigmatizing societal narratives, still think they are weak people, which can undermine their confidence in themselves, and make goal pursuit more difficult.”

“The solution we tested: better acknowledging one’s strength in the face of depression can help. When you or your loved ones experience depression, don’t overlook the strength it often takes to deal with depression – to fight the urge to stay in bed all day, and to continue living one’s life despite all the obstacles depression brings with it. This ‘reframing of depression’ we developed can help people better see their strength and pursue their goals in life, as we show.”

While these findings offer a promising avenue for supporting people with depression, there are limitations to consider. The study relied on self-reported measures of goal progress rather than objective observations. It is possible that participants in the intervention group simply felt more optimistic about their progress, though self-reports generally correlate well with actual behavior in psychological research. Future research could benefit from using objective data, such as fitness tracker logs or workplace performance records, to verify these improvements.

Another limitation involves the duration of the effect. The study tracked participants for only two weeks. It remains unclear how long the boost in confidence and goal pursuit lasts without a “booster” session to reinforce the message. The researchers also note that the participants were recruited from online platforms and were mostly from Western countries. It is not yet known if this specific type of reframing would work as effectively in different cultural contexts where the concept of the self and individual achievement might be viewed differently.

The researchers emphasize that this intervention is not a replacement for traditional treatments like therapy or medication. “Of course, treating depression as a disease itself (e.g., through psychotherapy, or medication) remains key,” Bauer noted. “Our intervention approach addressing stigma complements, but does in no way replace these approaches.”

Future research plans to explore if this “strength-based” approach could apply to other stigmatized groups, Bauer said. The researchers suggest that individuals who have survived trauma or who live with chronic physical illnesses might also benefit from reframing their struggles as evidence of their resilience. Recognizing the hidden strength in these experiences could offer a scalable, low-cost way to support mental well-being and personal growth across a variety of populations.

The study, “Depression-Reframing: Recognizing the Strength in Mental Illness Improves Goal Pursuit Among People Who Have Faced Depression,” was authored by Christina A. Bauer, Gregory M. Walton, Jürgen Hoyer, and Veronika Job.

Larger left hippocampus predicts better response to antidepressant escitalopram

18 February 2026 at 05:00

A study conducted in Japan of individuals suffering from moderate to severe depression found that those with larger left hippocampal volumes and greater leftward laterality were more likely to respond to treatment with escitalopram (i.e., to experience a reduction in depression symptoms). The volumes of the right hippocampus and right hippocampal head of responders also increased more in response to the medication. The paper was published in Translational Psychiatry.

Major depressive disorder is a common mental health condition characterized by persistent low mood and loss of interest or pleasure in daily activities. It goes beyond normal sadness and significantly interferes with functioning at work, school, and in relationships. Core symptoms include depressed mood, anhedonia, fatigue, and feelings of worthlessness or excessive guilt. Many individuals also experience changes in sleep, appetite, concentration, and psychomotor activity.

Despite the large number of people worldwide suffering from depression, treatments remain inadequate. Studies indicate that at least 30% of people with depression do not experience a remission of symptoms after completing two antidepressant treatment protocols, at which point their condition is reclassified as treatment-resistant depression. Other studies indicate that less than 10% of individuals seeking help for depression receive effective treatment.

Because of this, making antidepressant treatments more effective is a topic of great scientific interest. One avenue of this research is finding ways to identify which individuals will respond to standard treatments for depression and which will not.

Study author Toshiharu Kamishikiryo and his colleagues explored the relationship between the structural characteristics of the brain, changes in those characteristics, and escitalopram treatment for depression. Escitalopram is a selective serotonin reuptake inhibitor commonly used to treat major depressive disorder.

It is known that this medication promotes the creation of new neurons (neurogenesis) in the dentate gyrus of the hippocampus, thereby altering its structure. The study authors wanted to see whether these changes are associated with treatment response (i.e., with a reduction of depressive symptoms as a result of taking escitalopram).

Study participants were 107 individuals suffering from moderate to severe depression who were being treated with escitalopram. 52% of them were women, and their average age was 42 years (range: 25 to 73).

The participants underwent magnetic resonance imaging of their brains at two time points. The first scan took place, on average, 7–8 days after they started escitalopram treatment; the second was completed after an average of 55 days of treatment. However, only 71 participants (66%) completed the second scan.

At these two time points, participants also completed assessments of depression symptoms (the HRSD-17 and HRSD-6). If the score at the second time point was reduced by at least 50% compared to the start of the study, participants were classified as responders (i.e., they were considered to have responded to treatment). Non-responders were participants whose reduction in score was less than 50%.
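The response criterion used here is a simple threshold rule, which can be sketched as follows (the function and variable names are illustrative, not from the study):

```python
def classify_response(baseline_score: float, followup_score: float) -> str:
    """Label a participant a responder if their depression rating score
    dropped by at least 50% from baseline, per the criterion above."""
    reduction = (baseline_score - followup_score) / baseline_score
    return "responder" if reduction >= 0.5 else "non-responder"

# Example: an HRSD-17 score falling from 24 to 10 is a >50% reduction
print(classify_response(24, 10))  # responder
```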

Results showed that around 50% of participants responded to escitalopram treatment, and 34% achieved remission (i.e., their symptoms became minimal). Responders did not differ from non-responders in parameters such as body mass index and age.

Looking at brain area volume, responders had larger left hippocampal volume and greater leftward laterality (i.e. their left hippocampus was larger than the right) at the start of the study compared to non-responders. Also, the right hippocampus and right hippocampal head volume increased more in responders than in non-responders, and their laterality changed in response to escitalopram treatment. These changes were larger in individuals who experienced a stronger reduction of depressive symptoms.

“This study is the first to demonstrate that increases in the volume and changes in the laterality of the right total hippocampus and right hippocampal head are involved in the treatment response to escitalopram. The response to escitalopram treatment cannot be explained fully by hippocampal volume changes alone, but it is likely that volume changes in the right hippocampus and its head play an important role in improving depressive symptoms,” the study authors concluded.

The study contributes to the scientific understanding of the neural correlates of depressive symptoms. However, it should be noted that the study's design does not allow definitive causal inferences to be drawn from the results. Additionally, the study had a high attrition rate, leaving room for survivorship bias to have affected the results.

The paper, “Relationship between hippocampal volume and treatment response before and after escitalopram administration in patients with depression,” was authored by Toshiharu Kamishikiryo, Eri Itai, Yuki Mitsuyama, Yoshikazu Masuda, Osamu Yamamoto, Tatsuji Tamura, Hiroaki Jitsuiki, Akio Mantani, Norio Yokota, and Go Okada.

An AI analyzed wine reviews and found a surprising link to personality

18 February 2026 at 03:00

Your choice of a heavy Cabernet Sauvignon over a light Pinot Grigio might reveal more about your psyche than your palate. New research suggests that specific personality traits, such as openness and extraversion, are reliable predictors of a consumer’s preference for alcohol strength in wine. These findings appeared in the Journal of Personality.

Psychologists utilize a framework known as the Big Five to categorize human personality. This model divides character into five distinct dimensions. These are openness, conscientiousness, extraversion, agreeableness, and neuroticism. Openness measures a person’s desire for new experiences and intellectual curiosity. Conscientiousness tracks discipline and organization. Extraversion involves sociability and enthusiasm. Agreeableness reflects a tendency toward cooperation and social harmony. Finally, neuroticism gauges emotional instability and sensitivity to stress.

Marketers and scientists have previously studied how these traits influence general shopping habits. However, few studies have looked at how personality dictates the specific chemical properties of the products we buy. In the world of wine, alcohol content is a primary characteristic. It is measured as Alcohol by Volume, or ABV. This percentage does more than determine how quickly a drinker becomes intoxicated. It also changes the texture, body, and intensity of the flavor profile.

Xi Wang, a researcher at the School of Culture and Creativity at Beijing Normal-Hong Kong Baptist University, led the investigation. Wang and colleagues sought to understand if the psychological makeup of a consumer drives them toward bolder, higher-alcohol wines or lighter, lower-alcohol options. They aimed to move beyond simple demographics. The team wanted to see if the words consumers use could unlock the secrets of their sensory preferences.

To achieve this, the researchers turned to the massive amount of data available on e-commerce platforms. They focused on textual reviews left by verified buyers. The team collected 9,917 reviews from a major online wine retailer. These reviews spanned nearly a decade of consumer activity. The dataset included the text of the review and the specific technical details of the wine purchased, including its ABV.

The researchers needed a way to translate these thousands of written reviews into psychological profiles. They employed a form of artificial intelligence known as Natural Language Processing. Specifically, they used a model called BERT. This stands for Bidirectional Encoder Representations from Transformers. This tool is designed to understand the nuances and context of human language.

Before analyzing the wine reviews, the team had to teach the AI how to recognize personality traits. They trained the model using a separate dataset called the “myPersonality” project. This project contains thousands of social media status updates linked to verified personality scores. By analyzing these updates, the AI learned which words and sentence structures correlate with specific traits. For example, it learned how an extravert writes compared to a neurotic individual.

Once the AI was trained, the researchers applied it to the wine reviews. The model read the consumers’ feedback and assigned scores for each of the Big Five traits. The team then used a statistical method called beta regression to look for patterns. They checked for connections between the inferred personality scores and the alcohol percentage of the wines those people reviewed. They controlled for factors like price, wine type, and flavor to ensure the results were specific to personality.
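The modeling idea can be sketched on synthetic data. Beta regression handles an outcome bounded between 0 and 1 via a logit link; the sketch below uses a logit-transformed least-squares fit as a rough stand-in for full beta regression, with simulated trait scores whose effect directions mirror the reported findings (higher openness predicting higher ABV, higher extraversion predicting lower ABV). All numbers and variable names are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in data: inferred Big Five trait scores in [0, 1]
openness = rng.uniform(0, 1, n)
extraversion = rng.uniform(0, 1, n)

# Simulated ABV expressed as a proportion in (0, 1), built on a logit
# scale so the outcome stays bounded, as in beta regression
logit_abv = -1.8 + 0.6 * openness - 0.4 * extraversion + rng.normal(0, 0.1, n)
abv = 1 / (1 + np.exp(-logit_abv))

# Logit-transform the bounded outcome, then fit by ordinary least
# squares -- a simplified stand-in for beta regression with a logit link
y = np.log(abv / (1 - abv))
X = np.column_stack([np.ones(n), openness, extraversion])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # roughly recovers intercept -1.8, +0.6, -0.4
```

The signs of the recovered coefficients are what carry the interpretation: a positive coefficient for a trait means higher trait scores go with stronger wines, a negative one with weaker wines.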

The analysis revealed distinct patterns in how different people select wine. Consumers who scored high in openness showed a clear preference for wines with higher alcohol content. High-alcohol wines often have a richer body and more intense viscosity. This creates a complex sensory experience. The researchers suggest that people with high openness seek out this complexity. They are naturally inclined toward novel and stimulating sensations.

A similar trend appeared for individuals high in agreeableness. These consumers also gravitated toward wines with higher ABV. The drivers here appear to be social rather than purely sensory. Agreeable individuals value social harmony and often adhere to group norms. High-alcohol wines are frequently perceived as being of higher quality or prestige. These consumers may select such wines to align with perceived social standards or to gain approval in group settings.

The results for extraversion were unexpected. One might assume that sociable, sensation-seeking extraverts would want the strongest drink. The data showed the opposite. Higher extraversion scores were linked to a preference for wines with lower alcohol content. The authors propose a functional explanation for this behavior. Extraverts thrive on social interaction. They often wish to extend their time socializing. Drinking lower-alcohol wine allows them to consume more over a longer period without becoming overly intoxicated. It is a strategy to maintain social stamina.

Neuroticism also showed a negative association with alcohol strength. Consumers who scored high on this trait tended to buy wines with lower ABV. Neuroticism is characterized by anxiety and emotional sensitivity. Stronger alcohol can amplify loss of control or lead to negative emotional spirals. These individuals likely choose lighter wines as a form of self-protection. They may be avoiding the physiological risks associated with heavy intoxication.

The trait of conscientiousness stood apart from the others. The researchers found no statistical connection between this trait and alcohol preference. Conscientious people are typically disciplined and health-conscious. This might lead them to choose lower alcohol for health reasons. However, they are also quality-oriented and goal-driven. This might lead them to choose high-alcohol wines for their perceived sophistication. These competing motivations likely cancel each other out.

The study does have limitations. The data relied on consumers who take the time to write online reviews. These individuals may not represent the average wine drinker perfectly. Their writing style might differ from the general population. Additionally, the personality scores were inferred by AI rather than measured by direct psychological testing. While the model was accurate, it is an estimation.

Future research could expand on these methods. Scientists could investigate if these personality patterns hold true for other beverages like coffee or craft beer. They could also explore how these preferences change across different cultures. The study focused on a Western e-commerce environment. Cultural norms regarding alcohol and personality expression vary globally.

The study, “From Personality to Pour: How Consumer Traits Shape Wine Preferences and Alcohol Choices,” was authored by Xi Wang, Jie Zheng, and Yiqi Wang.
