A new study published in Evolutionary Psychological Science identifies five distinct strategies that women employ to detect or prevent deception from potential romantic partners. The findings indicate that introducing partners to family members and taking relationships slowly are the most common methods women use to verify a man's honesty. These behaviors appear to function as evolutionary countermeasures against the risks of sexual exploitation in mating contexts.
Humans face a fundamental adaptive challenge in mating: exploitation. One individual might attempt to enhance their own reproductive success at the expense of another's fitness. This dynamic often involves deception, where a person misrepresents their intentions or background to gain sexual access.
Evolutionary theory suggests that women have historically faced higher costs from such deception than men have. This disparity stems from biological realities regarding parental investment. Women are obligated to invest substantial metabolic resources into offspring through gestation and lactation.
Men, conversely, can theoretically achieve reproduction with a minimal investment of time and resources. This asymmetry means that a man could walk away after a sexual encounter with few immediate consequences. A woman in the same situation would be left with the burdens of pregnancy and child-rearing without partner support.
This discrepancy likely created strong selection pressures for women to develop specific defenses. Researchers view this interaction as a form of evolutionary arms race. As men developed deceptive tactics to secure short-term mating opportunities, women likely co-evolved detection strategies to protect themselves.
"The core concept draws from the evolutionary arms race between measures of exploitation and counter-exploitation, as previously examined in studies of rape avoidance mechanisms that mitigate the high costs of rape as an exploitative strategy. A milder form of intersexual conflict manifests in sexual deception, yet a key research gap persisted regarding women's specific counter-strategies to this form of exploitation," said study author Peyman Sayyad of the Shams Higher Education Institute.
The researchers sought to catalog women's specific anti-deception tactics. They aimed to understand how these behaviors are structured and what personality traits influence their use. The researchers conducted two separate investigations to explore this topic.
The first study utilized a qualitative approach to generate a broad list of potential behaviors. The research team recruited 147 female undergraduate students from a large public university in the Southeastern United States. The average age of these participants was approximately 19 years old.
Participants answered open-ended questions about what actions they or other women take to avoid being deceived in dating contexts. They were asked to describe specific things they might do, such as asking friends for verification. They also listed things they might avoid doing, such as rushing into intimacy.
The researchers and a graduate student independently analyzed these written responses. They worked to eliminate vague or redundant answers to create a consolidated list. This process resulted in the identification of 43 distinct anti-deception acts that women might perform.
The second study involved a new group of 249 female participants recruited from the same university setting. The sample was predominantly White, though it included participants from various ethnic backgrounds. Approximately 44 percent of the sample reported being in a relationship at the time of the study.
These participants reviewed the list of 43 behaviors identified in the first phase. They rated how likely they would be to perform each action on a scale ranging from "to no extent" to "to a very high extent." This allowed the researchers to quantify which strategies are most prevalent.
The researchers also administered standard psychological questionnaires to assess individual personality differences. Participants completed the Mate Value Scale to rate their own self-perceived desirability as a partner.
They also completed the revised Sociosexual Orientation Inventory. This inventory measures an individual's willingness to engage in uncommitted sexual activity. It assesses past sexual behavior, attitudes toward casual sex, and sexual desire. Higher scores on this measure indicate a more unrestricted sociosexuality, meaning a preference for short-term mating.
The researchers also measured attachment styles using the Experiences in Close Relationships Scale. This specifically looked at avoidant attachment, which involves discomfort with intimacy. Finally, the researchers assessed neuroticism using a short form of the Big Five Inventory.
Statistical analysis of the survey responses revealed that the anti-deception tactics clustered into five main categories. The researchers labeled the first and most frequently considered category as "Integration." This domain involves introducing a potential partner to family members or meeting his family.
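Grouping item ratings into domains like this is typically done with a factor analysis of the rating matrix. The sketch below illustrates the idea on simulated data using a simple principal-component extraction in Python; the sample sizes match the study, but the ratings, loadings, and the planted five-factor structure are invented stand-ins, not the authors' data or their exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the real data (which is not public): 249 women
# rating 43 anti-deception acts, generated from five latent domains.
n_women, n_acts, n_domains = 249, 43, 5
true_loadings = rng.normal(size=(n_acts, n_domains))
true_scores = rng.normal(size=(n_women, n_domains))
ratings = true_scores @ true_loadings.T + rng.normal(scale=0.5, size=(n_women, n_acts))

# Principal-component extraction on the item correlation matrix --
# a simple proxy for the factor analysis such studies typically run.
z = (ratings - ratings.mean(0)) / ratings.std(0)
corr = (z.T @ z) / n_women
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first

# Keep five components; items with large loadings on the same component
# cluster into one domain (e.g., Integration, Reticence).
loadings = eigvecs[:, order[:n_domains]] * np.sqrt(eigvals[order[:n_domains]])
print(loadings.shape)  # (43, 5): one loading per act per domain
```

In practice, researchers would rotate these loadings (e.g., varimax) and name each factor from its highest-loading items, which is how labels like "Integration" emerge.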
Integration serves as a robust vetting mechanism. Involving family allows a woman to verify a partner's background and intentions through the scrutiny of kin. This finding aligns with historical patterns where families played a central role in mate selection.
The second most common domain was labeled "Reticence." This strategy focuses on slowing down the pace of the relationship to prevent premature emotional attachment. Tactics in this category include avoiding rushing into commitment or delaying sexual intimacy until trust is firmly established.
By maintaining distance, a woman can observe a partner's behavior over time. This reduces the risk of overlooking red flags due to the blinding effects of infatuation. It provides a longer window for deceptive signals to become apparent.
The third domain identified was "Social Media." This involves researching a partner's online presence or checking the profiles of his friends. Women might look for inconsistencies between what a man says and what his digital footprint reveals.
The fourth category was "Religion Matching." This entails seeking partners with shared religious beliefs or ensuring a partner is a practicing believer. This strategy relies on the heuristic that religious individuals may adhere to stricter moral codes regarding honesty and fidelity.
The least common strategy was labeled "Distrust." This category includes more active and confrontational tactics. For example, a woman might ask questions to which she already knows the answer to test a partner's honesty.
"Women might employ diverse strategies to counter sexual deception in mating and dating contexts," Sayyad told PsyPost. "These include familial oversight, religion, and modern cultural mechanisms like social media."
The researchers also found associations between these strategies and individual personality traits. Women who were more open to short-term mating were less likely to use Integration or Religion Matching tactics. This suggests that women focused on casual relationships may prioritize these long-term vetting mechanisms less.
For women pursuing short-term mating, the goal is often immediate sexual access rather than long-term resource provisioning. As a result, the deep vetting provided by family integration or religious alignment may be viewed as unnecessary obstacles.
Additionally, the researchers found a link between attachment style and behavior. Women with higher levels of avoidant attachment were more likely to use Reticence tactics. These individuals often feel uncomfortable with intimacy and may use distance as a protective mechanism.
This tendency to hold back serves a dual purpose for avoidantly attached women. It protects them from the emotional risks of intimacy while simultaneously guarding against deception. By not committing quickly, they minimize their vulnerability to exploitation.
Contrary to the researchers' expectations, a woman's self-perceived mate value did not predict which tactics she used. High mate value is often associated with being a target for deception. The authors hypothesized that these women would be more vigilant, but the data did not support this link.
Similarly, neuroticism did not show a significant connection to any specific anti-deception domain. Neuroticism is characterized by higher sensitivity to threat and negative emotion. The researchers expected this trait to correlate with increased vigilance, but the results were null.
There are some limitations to consider. The sample consisted entirely of undergraduate women. This demographic is relatively young and may have limited mating experience compared to older adult populations.
The specific context of the study also matters. The research focused on a modern Western environment where women have free choice in mating. This differs from ancestral environments or cultures where family members play a dominant role in arranging marriages.
The study also relied on self-reported intentions rather than observed behaviors. Participants indicated what they would do, which may not perfectly align with their actions in a real-world scenario. Future research is needed to determine the actual effectiveness of these tactics in detecting lies.
"This study investigates women's counter-strategies to sexual deception within a free-choice mating context that minimizes parental involvement, diverging from ancestral conditions prevalent across much of human history," Sayyad noted. "Moreover, such defenses may rely on domain-general adaptations to exploitation rather than deception-specific mechanisms, warranting more tests in future research. These caveats highlight opportunities for extensions."
It is also possible that men have evolved counter-counter-strategies. If women use these specific tactics to detect deception, men may have developed ways to bypass these checks. This ongoing co-evolutionary dynamic suggests that the repertoire of deception and detection is likely complex.
The findings provide a structured framework for understanding how women navigate the risks of modern dating. They highlight that skepticism is not a singular trait but manifests through diverse behavioral strategies. These strategies appear to be deployed selectively based on a woman's mating goals and attachment style.
"Assessing the role of parents in offspring intersexual conflicts offers a promising avenue for future research," Sayyad added.
The study, "Women's Anti-Deception Tactics in Mating: A Preliminary Investigation," was authored by Peyman Sayyad, Mazyar Bagherian, Farid Pazhoohi, and Mitch Brown.



Recent research suggests that the consumption of caffeinated beverages is linked to a measurable increase in positive feelings, particularly during the morning hours. While caffeine reliably lifts spirits, its ability to reduce negative emotions appears less consistent and does not depend on the time of day. These findings were detailed in a paper published in the journal Scientific Reports.
Caffeine is the most widely consumed psychoactive substance in the world. Estimates suggest that nearly 80 percent of the global population ingests it in some form. Common sources include coffee, tea, soda, and chocolate. Consumers often rely on these products to combat fatigue or improve their focus. Many also anecdotally report that a cup of coffee improves their general disposition.
Researchers have studied the effects of caffeine extensively in laboratory settings. These controlled environments have confirmed that the substance acts as a stimulant for the central nervous system. However, laboratories are artificial environments. They strip away the messy variables of daily life. They cannot easily account for social interactions, work stress, or the natural fluctuations of the biological clock.
Justin Hachenberger, a researcher at Bielefeld University in Germany, led a team to investigate these effects in the real world. The team sought to understand how caffeine interacts with an individualβs emotional state outside of the laboratory. They also wanted to see if factors like the time of day or social setting changed the outcome.
To understand the study, it is helpful to distinguish between "mood" and "affect." In psychology, mood typically refers to a sustained emotional state that lasts for a long period. Affect refers to short-term, reactive emotional states. These are the immediate feelings a person experiences in response to a stimulus. The researchers focused specifically on momentary affect.
The biological mechanism behind caffeine is well understood. The substance acts as an adenosine antagonist. Adenosine is a chemical that accumulates in the brain throughout the day. It binds to specific receptors and slows down nerve cell activity. This process creates the sensation of drowsiness.
Caffeine mimics the shape of adenosine. It binds to the same receptors but does not activate them. This blocks the real adenosine from doing its job. By preventing this slowdown, caffeine allows stimulating neurotransmitters like dopamine to remain active. This leads to increased alertness and potentially improved feelings of well-being.
The researchers employed a technique known as the Experience Sampling Method. This approach involves asking participants to report on their experiences repeatedly throughout the day in their natural environments. This method reduces memory errors. Participants report what they are feeling right now rather than what they remember feeling yesterday.
The investigation consisted of two separate studies involving young adults. The first study tracked 115 participants for two weeks. The second tracked 121 participants for four weeks. Participants ranged in age from 18 to 29. They used smartphones to answer short surveys seven times a day.
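Experience-sampling apps commonly split the waking day into equal blocks and fire one prompt at a random moment inside each block, so surveys are unpredictable but still spread across the day. A minimal Python sketch of that scheduling idea; the wake/sleep window and block logic are illustrative assumptions, not the actual app used in the study.

```python
import random
from datetime import datetime, timedelta

def daily_prompt_times(wake="09:00", sleep="23:00", n_prompts=7, seed=42):
    """Pick n semi-random prompt times, one per equal-width block of the
    waking day (illustrative scheduling, not the study's app logic)."""
    rng = random.Random(seed)
    start = datetime.strptime(wake, "%H:%M")
    end = datetime.strptime(sleep, "%H:%M")
    block = (end - start) / n_prompts
    times = []
    for i in range(n_prompts):
        # Random offset within block i keeps prompts unpredictable
        # while guaranteeing they stay spread out and in order.
        offset = timedelta(seconds=rng.random() * block.total_seconds())
        times.append((start + i * block + offset).strftime("%H:%M"))
    return times

print(daily_prompt_times())  # seven HH:MM strings in ascending order
```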
In each survey, participants reported whether they had consumed any caffeinated beverages in the past 90 minutes. They also rated their current feelings. They used a sliding scale to indicate how enthusiastic, happy, or content they felt. These items combined to form a score for positive affect. They also rated how sad, upset, or worried they felt. These items formed a score for negative affect.
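Turning the slider items into the two composite scores is a per-dimension average. A small Python sketch of that step; the item names and values are illustrative, and the study's exact items and scale anchors may differ.

```python
import numpy as np

# One ESM survey response with 0-100 slider values (illustrative only).
response = {
    "enthusiastic": 72, "happy": 80, "content": 65,   # positive-affect items
    "sad": 10, "upset": 5, "worried": 20,             # negative-affect items
    "caffeine_past_90min": True,
}

POSITIVE = ["enthusiastic", "happy", "content"]
NEGATIVE = ["sad", "upset", "worried"]

def affect_scores(resp):
    """Average the item ratings into one score per affect dimension."""
    pa = np.mean([resp[i] for i in POSITIVE])
    na = np.mean([resp[i] for i in NEGATIVE])
    return pa, na

pa, na = affect_scores(response)
print(pa, na)  # positive ~ 72.3, negative ~ 11.7
```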
The data showed a clear association between caffeine and positive feelings. In both studies, participants reported higher levels of enthusiasm and happiness after consuming caffeine. The statistical analysis accounted for sleep duration and sleep quality. This suggests the mood boost was not simply a result of being well-rested.
The timing of consumption played a major role in the intensity of this effect. The association between caffeine and positive affect was strongest in the first few hours after waking up. Specifically, the boost was most pronounced within 2.5 hours of rising.
This morning peak aligns with the concept of sleep inertia. This is the groggy transition period between sleep and full wakefulness. The researchers propose that caffeine may help individuals overcome this state more effectively. It helps jump-start the sympathetic nervous system. As the day progressed, the link between caffeine and positive feelings weakened.
The results regarding negative affect were different. The researchers hypothesized that caffeine would reduce feelings of sadness or worry. The data only partially supported this. A reduction in negative affect was observed in the second, longer study. It was not observed in the first study.
Unlike positive feelings, the reduction in negative feelings did not change based on the time of day. If caffeine helped mitigate sadness, it did so regardless of whether it was morning or evening. This suggests that the mechanisms driving positive and negative affect may differ.
The study also examined whether the context of consumption mattered. The researchers looked at whether participants were alone or with others. They also asked about levels of tiredness.
Tiredness acted as a moderator for the effect. Participants who felt more tired than usual experienced a greater increase in positive affect after consuming caffeine. This supports the common use of caffeine as a countermeasure against fatigue.
Social context also influenced the results. The link between caffeine and positive affect was weaker when participants were around other people. This finding is somewhat counterintuitive. One might expect socializing over coffee to boost mood further.
The authors suggest a "ceiling effect" might be at play. Social interaction often increases positive affect on its own. If a person is already feeling good because they are with friends, caffeine may not be able to push their positive feelings much higher. The chemical effect becomes less noticeable amidst the social stimulation.
The researchers also looked for differences based on individual traits. They collected data on participants' habitual caffeine intake. They also screened for symptoms of anxiety and depression using standardized questionnaires.
Surprisingly, these individual differences did not alter the results. The relationship between caffeine and mood remained consistent across the board. Frequent consumers did not show a different pattern of emotional response compared to lighter users.
This challenges the "withdrawal reversal" hypothesis. Some scientists argue that caffeine only makes people feel better because it cures withdrawal symptoms. If that were the only factor, heavy users would experience a massive boost while light users would feel little. The consistency across groups suggests there may be a direct mood-enhancing effect beyond just fixing withdrawal.
Hachenberger noted this consistency in the press materials. He stated, "We were somewhat surprised to find no differences between individuals with varying levels of caffeine consumption or differing degrees of depressive symptoms, anxiety, or sleep problems. The links between caffeine intake and positive or negative emotions were fairly consistent across all groups."
However, there are caveats to consider. The study relied on self-reports. While the sampling method is robust, it still depends on participant honesty and accuracy. The sample consisted entirely of young adults. The way an 18-year-old metabolizes caffeine may differ from that of an older adult.
Additionally, the study is observational. It shows a correlation but cannot prove causation. It is possible that people who are already in a good mood are more likely to seek out coffee. However, the use of within-person analysis helps control for this to some degree.
There is also the question of anxiety. High doses of caffeine can induce jitteriness and anxiety. The study did not find a link between caffeine and increased worry. However, the researchers note that individuals prone to caffeine-induced anxiety might avoid the substance entirely. These people would naturally exclude themselves from a study on caffeine consumption.
The researchers recommend future studies use more objective measures. Wearable technology could track heart rate and skin temperature. This would provide precise physiological data to match the psychological reports. Tracking the exact moment of consumption, rather than a 90-minute window, would also improve precision.
Understanding these daily fluctuations helps paint a clearer picture of human behavior. It moves the science of nutrition and psychology out of the lab and into the rhythm of daily life. For now, the data supports the habit of the morning coffee. It appears to be an effective tool for boosting positive engagement with the day, particularly in those first groggy hours.
The study, "The association of caffeine consumption with positive affect but not with negative affect changes across the day," was authored by Justin Hachenberger, Yu-Mei Li, Anu Realo, and Sakari Lemola.



A new study published in Social Psychological and Personality Science challenges the conventional wisdom regarding the relationship between self-discipline and happiness. The findings suggest that psychological well-being acts as a precursor to self-control rather than a result of it. This research indicates that individuals who prioritize their emotional health may be better equipped to pursue long-term goals than those who rely solely on willpower.
Psychology has traditionally viewed self-control as a prized human capacity that is essential for a successful life. The general assumption holds that the ability to resist short-term temptations in favor of long-term goals leads to better health, career success, and financial security. By extension, scholars and the public alike often assume that exercising high self-control leads to increased happiness and life satisfaction.
Despite the popularity of this belief, the scientific evidence supporting a direct causal link from self-control to well-being has been inconclusive. Many previous studies relied on correlational data, which can show that two things are related but cannot determine which one causes the other. Other studies that attempted to track these variables over time faced methodological issues that made it difficult to draw firm conclusions about directionality.
"Our work was driven by a significant gap in the existing research. For years, psychologists have operated under the strong assumption that self-control is a key driver of well-being," said study author Lile Jia, an associate professor at the National University of Singapore and director of the Situated Goal Pursuit (SPUR) Lab.
"The narrative is that if you are more disciplined, you will be happier and more satisfied with life. However, when we examined the scientific literature, the causal evidence for this claim was surprisingly weak and fraught with issues. Most studies were correlational, and the few longitudinal studies attempting to establish causality had methodological limitations that made their conclusions ambiguous."
"At the same time, there are strong theoretical reasons to suspect the causal arrow might point in the opposite direction," Jia explained. "For example, Barbara Fredrickson's 'broaden-and-build' theory suggests that positive emotions, a core component of well-being, broaden our mindset and help us build personal resources. We reasoned that these resources could, in turn, facilitate better self-control."
"So, the central motivation was to rigorously test these competing causal pathways. We wanted to clarify the directionality of this important relationship between self-control and well-being using more robust statistical methods (the RI-CLPM) and a three-wave longitudinal design, which is better suited for making causal inferences than the two-wave designs used in prior work."
The researchers conducted two separate longitudinal studies. Study 1 involved 377 working adults recruited from an Asian country. The participants were part of a larger project regarding career development and lifelong learning.
The researchers collected data from these participants at three distinct time points, with each wave separated by a six-month interval. This design allowed the team to track changes within the same individuals over a period of one year. To measure self-control, the participants completed a 20-item scale that assessed their ability to inhibit impulses, initiate work, and continue good behaviors.
For the assessment of well-being, the participants responded to a scale designed to be culturally appropriate for the population. This measure included items asking about their levels of happiness, self-worth, and appreciation for life. The team also utilized a statistical technique known as the random intercept cross-lagged panel model.
This specific analytical approach is significant because it separates stable personality traits from temporary fluctuations within a person. It allowed the researchers to determine if a specific increase in well-being at one time point predicted a subsequent increase in self-control at the next time point. By isolating these within-person changes, the model provides a stronger test for potential causal influence than traditional methods.
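The intuition behind separating stable traits from fluctuations can be shown in a toy simulation: give each simulated person a stable trait level plus wave-to-wave fluctuations, plant a within-person lagged effect of well-being on self-control, and check that person-mean centering recovers it. This is a deliberately simplified Python illustration of the centering logic only, not the authors' actual RI-CLPM estimation (which is a structural equation model); all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: stable between-person trait levels (random intercepts) plus
# wave-to-wave noise; illustrative parameters, not the study's estimates.
n, waves, cross_lag = 2000, 3, 0.5
trait_wb = rng.normal(size=(n, 1))                    # stable well-being level
trait_sc = 0.6 * trait_wb + rng.normal(scale=0.8, size=(n, 1))

wb = trait_wb + rng.normal(size=(n, waves))           # observed well-being
sc = trait_sc + rng.normal(scale=0.5, size=(n, waves))
# Plant the within-person effect: well-being fluctuation at wave t
# raises self-control at wave t+1.
sc[:, 1:] += cross_lag * (wb[:, :-1] - trait_wb)

# Person-mean centering strips the stable trait, isolating within-person
# change -- the key move the random-intercept model formalizes.
wb_w = wb - wb.mean(axis=1, keepdims=True)
sc_w = sc - sc.mean(axis=1, keepdims=True)

x = wb_w[:, :-1].ravel()        # well-being at t (within-person)
y = sc_w[:, 1:].ravel()         # self-control at t+1 (within-person)
beta = (x @ y) / (x @ x)        # simple lagged regression slope
print(round(beta, 2))           # positive: the planted cross-lag survives
```

With only three waves the centered estimate is attenuated relative to the planted 0.5, which is one reason full RI-CLPM software models the intercepts explicitly rather than centering by hand.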
The results from the first study revealed a pattern that contradicted the traditional narrative. Earlier levels of self-control did not reliably predict improvements in well-being six months later. Simply exercising discipline did not appear to make participants happier in the future.
In contrast, the data supported the reverse hypothesis. Participants who reported higher levels of well-being at one time point exhibited greater self-control at the next measurement wave. Feeling well appeared to function as a precursor to functioning well.
To ensure these findings were not specific to one culture or time interval, the researchers conducted a second study. Study 2 recruited a larger sample of 1,299 working adults in the United States. This study followed a similar three-wave design but utilized a shorter time frame to capture more immediate effects.
Participants in the American sample completed surveys once a month for three consecutive months. They answered the same self-control questions used in the first study. To measure well-being, they completed a scale assessing positive feelings, optimism, and vitality.
The analysis of the American data yielded results that mirrored those of the Asian sample. High levels of self-control at the start of a month did not lead to increased well-being the following month. The anticipated reward of happiness following disciplined behavior did not materialize in the short term.
However, the reverse relationship remained significant and positive. Individuals who felt more optimistic and energetic at the beginning of the month demonstrated better self-control a month later. This replication across two different cultures and timeframes provides robust evidence that the primary direction of influence flows from well-being to self-control.
"The most surprising result was the consistent lack of evidence for the popular belief that self-control predicts later well-being," Jia told PsyPost. "Given how deeply this idea is embedded in both scientific thinking and popular culture, we expected to see at least a small effect in that direction. To find that the data from two separate studies so clearly supported only the path from well-being to self-control was quite striking. It really challenges a foundational assumption and underscores the need to re-evaluate how we think about these two critical aspects of a good life."
The researchers conducted supplementary analyses to further check these patterns. In the first study, participants also provided daily reports of their mood and behavior for a week. These daily records showed that while positive emotions predicted self-control months later, self-control did not uniquely predict daily positive emotions when general well-being was taken into account.
The researchers propose that positive emotions may help replenish the mental energy required to resist temptations and stick to difficult tasks. When people feel good, they may be more open to challenges and better at managing conflicting goals. This aligns with the idea that well-being acts as fuel for the engine of self-control.
"The most important takeaway for the average person is to reconsider how they approach self-improvement," Jia said. "The common advice is often to 'just try harder' or to focus on building discipline through sheer willpower. Our findings suggest a potentially more effective, and certainly more pleasant, alternative: prioritize your well-being to build your self-control."
"Instead of viewing happiness as a reward you get after achieving your goals through discipline, think of well-being as the fuel that powers the engine of self-control. If you want to get better at resisting temptations, starting new projects, or sticking with good habits, a great first step is to invest in activities that make you feel happy, energetic, optimistic, and appreciative of life. Our research indicates that feeling well precedes functioning well."
The study's strength lies in its use of a three-wave longitudinal design across two diverse cultural samples. But as with all research, there are some limitations. The statistical framework used relies on the assumption that the relationships between variables remain constant over time. It is also possible that unmeasured third variables, such as changes in sleep, stress, or social support, could influence both well-being and self-control simultaneously.
It is also important to note that the absence of a short-term effect does not mean self-control has no relationship with happiness. "A crucial caveat is that 'absence of evidence is not evidence of absence,'" Jia explained. "Our study failed to find a within-person causal effect of self-control on well-being, but this does not mean that self-control is unimportant for happiness altogether.
"It's possible that having high self-control as a stable, long-term trait contributes to a person's overall life satisfaction (a between-persons effect), even if short-term fluctuations in self-control don't cause short-term fluctuations in well-being."
"So, the misinterpretation to avoid is thinking 'self-control doesn't matter for happiness.' A more accurate interpretation is that if you are looking for a positive change, our evidence suggests that boosting your well-being is a more direct and effective way to improve your self-control, rather than the other way around."
Future research could explore the specific mechanisms that allow well-being to improve self-control. It may be that positive moods accelerate habit formation or enhance cognitive flexibility. Understanding these processes could lead to better interventions for people struggling with self-regulation.
"The path to greater self-control doesn't have to be a grim, effortful struggle," Jia added. "Instead, it can be paved with positive experiences. By actively cultivating joy, engagement, and meaning in our lives, we are not just making ourselves feel better in the moment; we are also building the psychological resources we need to be more effective and successful in the future. It places the pursuit of well-being at the very center of personal growth."
The study, "Feeling Well, Functioning Well: How Psychological Well-Being Predicts Later Self-Control, but Not the Other Way Around," was authored by Shuna Shiann Khoo, Lile Jia, Ismaharif Ismail, Ying Li, Liangyu Xing, and Jolynn Pek.

The brain has its own waste disposal system, known as the glymphatic system, that's thought to be more active when we sleep.
But disrupted sleep might hinder this waste disposal system and slow the clearance of waste products or toxins from the brain. And researchers are proposing that a build-up of these toxins due to lost sleep could increase someone's risk of dementia.
There is still some debate about how this glymphatic system works in humans, with most research so far in mice.
But it raises the possibility that better sleep might boost clearance of these toxins from the human brain and so reduce the risk of dementia.
Here's what we know so far about this emerging area of research.
All cells in the body create waste. Outside the brain, the lymphatic system carries this waste from the spaces between cells to the blood via a network of lymphatic vessels.
But the brain has no lymphatic vessels. And until about 12 years ago, how the brain clears its waste was a mystery. That's when scientists discovered the "glymphatic system" and described how it "flushes out" brain toxins.
Let's start with cerebrospinal fluid, the fluid that surrounds the brain and spinal cord. This fluid flows in the areas surrounding the brain's blood vessels. It then enters the spaces between the brain cells, collecting waste, then carries it out of the brain via large draining veins.
Scientists then showed in mice that this glymphatic system was most active β with increased flushing of waste products β during sleep.
One such waste product is amyloid beta (AΞ²) protein. AΞ² that accumulates in the brain can form clumps called plaques. These, along with tangles of tau protein found in neurons (brain cells), are a hallmark of Alzheimerβs disease, the most common type of dementia.
In humans and mice, studies have shown that levels of AΞ² detected in the cerebrospinal fluid increase when awake and then rapidly fall during sleep.
But more recently, another study (in mice) showed pretty much the opposite β suggesting the glymphatic system is more active in the daytime. Researchers are debating what might explain the findings.
So we still have some way to go before we can say exactly how the glymphatic system works β in mice or humans β to clear the brain of toxins that might otherwise increase the risk of dementia.
We know sleeping well is good for us, particularly our brain health. We are all aware of the short-term effects of sleep deprivation on our brainβs ability to function, and we know sleep helps improve memory.
In one experiment, a single night of complete sleep deprivation in healthy adults increased the amount of AΞ² in the hippocampus, an area of the brain implicated in Alzheimerβs disease. This suggests sleep can influence the clearance of AΞ² from the human brain, supporting the idea that the human glymphatic system is more active while we sleep.
This also raises the question of whether good sleep might lead to better clearance of toxins such as AΞ² from the brain, and so be a potential target to prevent dementia.
What is less clear is what long-term disrupted sleep, for instance if someone has a sleep disorder, means for the bodyβs ability to clear AΞ² from the brain.
Sleep apnoea is a common sleep disorder when someoneβs breathing stops multiple times as they sleep. This can lead to chronic (long-term) sleep deprivation, and reduced oxygen in the blood. Both may be implicated in the accumulation of toxins in the brain.
Sleep apnoea has also been linked with an increased risk of dementia. And we now know that after people are treated for sleep apnoea more AΞ² is cleared from the brain.
Insomnia is when someone has difficulty falling asleep and/or staying asleep. When this happens in the long term, thereβs also an increased risk of dementia. However, we donβt know the effect of treating insomnia on toxins associated with dementia.
So again, itβs still too early to say for sure that treating a sleep disorder reduces your risk of dementia because of reduced levels of toxins in the brain.
Collectively, these studies suggest enough good quality sleep is important for a healthy brain, and in particular for clearing toxins associated with dementia from the brain.
But we still don’t know if treating a sleep disorder or improving sleep more broadly affects the brain’s ability to remove toxins, and whether this reduces the risk of dementia. It’s an area researchers, including us, are actively working on.
For instance, we’re investigating the concentration of Aβ and tau measured in blood across the 24-hour sleep-wake cycle in people with sleep apnoea, on and off treatment, to better understand how sleep apnoea affects brain cleaning.
Researchers are also looking into the potential for treating insomnia with a class of drugs known as orexin receptor antagonists to see if this affects the clearance of Aβ from the brain.
This is an emerging field and we don’t yet have all the answers about the link between disrupted sleep and dementia, or whether better sleep can boost the glymphatic system and so prevent cognitive decline.
So if you are concerned about your sleep or cognition, please see your doctor.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Going to school helps children learn how to read and solve math problems, but it also appears to upgrade the fundamental operating system of their brains. A new analysis suggests that the structured environment of formal education leads to improvements in executive functions, which are the cognitive skills required to control behavior and achieve goals. These findings were published in the Journal of Experimental Child Psychology.
To understand why this research matters, one must first understand what executive functions are. Psychologists use this term to describe a specific set of mental abilities that allow people to manage their thoughts and actions. These skills act like an air traffic control system for the brain. They help a person pay attention, switch focus between tasks, and remember instructions.
There are three main components to this system. The first is working memory, which is the ability to hold information in your mind and use it over a short period. The second is inhibitory control. This is the ability to ignore distractions and resist the urge to do something impulsive. The third is cognitive flexibility. This allows a person to shift their thinking when the rules change or when a new problem arises.
Researchers have known for a long time that these skills get better as children get older. A seven-year-old is almost always better at sitting still and following directions than a four-year-old. The difficult question for scientists has been determining what causes this change. It is hard to tell if children improve simply because their brains are biologically maturing or if the experience of going to school actually speeds up the process.
This is the question that Jamie Donenfeld and her colleagues sought to answer. Donenfeld is a researcher at the University of Massachusetts Boston. She worked alongside Mahita Mudundi, Erik Blaser, and Zsuzsa Kaldy, who are also affiliated with the Department of Psychology at the same university. The team wanted to isolate the specific impact of the classroom environment from the natural effects of aging.
To do this, the researchers relied on a clever quirk of the educational system known as the school entry cutoff date. In many school districts, a child must turn five by a specific date, such as September 1, to enter kindergarten. This creates a natural experiment.
Consider two children who are practically the same age. One was born on August 31, and the other was born on September 2. The child born in August enters kindergarten. The child born in September must wait another year. By comparing these two groups, scientists can look at children who are virtually identical in biological maturity but have vastly different experiences with formal schooling.
The research team did not conduct a single new experiment with a specific group of children. Instead, they performed a meta-analysis. This is a statistical method that allows scientists to combine the results of many previous studies to find a common trend. They searched through databases for studies published between 1995 and 2023.
They started with over 400 potential studies. They screened these records to find ones that met strict criteria. The studies had to compare children of similar ages who had different levels of schooling. They also had to use objective measures of executive function.
The team ultimately identified 12 studies that fit all their requirements. These studies included data from a combined 1,611 children. The participants ranged in age from about four and a half to nine years old. The studies covered various locations, including the United States, Germany, Israel, and Scotland.
By pooling the data from these different sources, the researchers calculated a standardized mean difference. This number represents the size of the “schooling effect.” The analysis revealed a small but consistent positive effect. The data showed that attending school does improve a child’s executive functions.
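To make the pooling step concrete, a fixed-effect meta-analysis combines each study's standardized mean difference using inverse-variance weights. This is a minimal illustrative sketch, not the authors' actual analysis; the effect sizes and variances below are invented, not values from the 12 included studies.

```python
# Illustrative fixed-effect meta-analysis of standardized mean differences.
# The per-study numbers are hypothetical, for demonstration only.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean of per-study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate
    return pooled, pooled_se

# Hypothetical "schooling effect" estimates from four studies:
effects = [0.25, 0.10, 0.30, 0.15]
variances = [0.02, 0.04, 0.05, 0.03]

d, se = pooled_effect(effects, variances)
print(f"pooled d = {d:.3f} (SE = {se:.3f})")
```

More precise studies (smaller variances) pull the pooled estimate toward their own values, which is why a handful of well-powered studies can dominate a meta-analytic result.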
The improvement was not massive, but it was reliable. The researchers described the effect as modest. It suggests that the experience of school provides a unique boost to cognitive development that goes beyond just getting older.
The researchers also conducted a secondary analysis using the longitudinal studies in their set. These were studies that followed children over time. They compared two types of groups. The first group consisted of children who did not advance a grade level during the study period, such as those remaining in preschool. This group provided a baseline for how much executive function improves due to natural maturation alone.
The second group consisted of children who completed a grade, such as first grade, during the same timeframe. This group represented the combined effect of biological maturation plus the experience of schooling.
The results showed a clear difference. The children who experienced a year of schooling showed greater gains in executive functions than those who only grew a year older. The estimated effect size for the schooling group was higher than for the maturation-only group. This supports the idea that the classroom environment acts as a training ground for the brain.
It is important to consider why school has this effect. The authors argue that formal education places heavy demands on a child. Students must sit still for extended periods. They must listen to instructions from teachers. They have to wait their turn to speak. They must remember rules and complete tasks even when they are tired or bored.
This daily routine serves as an intense practice session for inhibitory control and working memory. The state of Massachusetts, for example, requires 900 hours of structured learning time per year. That is a massive amount of practice.
The authors compared this to commercial “brain training” games. Many companies sell video games that claim to improve cognitive skills. However, research has largely shown that these games do not work very well. Players get better at the specific game, but the skills do not transfer to real life.
The researchers suggest that school succeeds where these games fail because of the intensity and duration of the experience. A few hours of gaming cannot compare to hundreds of hours of managing one’s behavior in a social classroom setting. The context of school is immersive. It requires children to use their executive functions in real-world situations to achieve social and academic goals.
There are limitations to this study that should be noted. The number of studies included in the final analysis was relatively small. Finding research that strictly followed the cutoff-date design is difficult. This means the total pool of participants was not as large as it is in some medical meta-analyses.
The studies also used a wide variety of tasks to measure executive functions. Some used memory games involving numbers. Others used tasks where children had to sort cards by changing rules. Some tested inhibitory control by asking children to touch their toes when told to touch their head.
This variety makes it harder to compare results perfectly across different papers. The educational systems in the different countries also vary. Kindergarten in Switzerland might focus more on play than kindergarten in the United States. This could influence how much “training” the children actually receive.
The authors also noted that they could not examine specific transitions in detail. It is possible that the jump from preschool to kindergarten has a bigger impact than the move from first to second grade. The current data did not allow them to break down the results by specific grade levels with high precision.
Future research is needed to understand which parts of schooling are the most effective. It might be the structured curriculum. It might be the social interaction with peers. It might be the relationship with the teacher. Understanding the specific mechanisms could help educators design classrooms that better support cognitive development.
The researchers also point out that the tests used in these studies are laboratory tasks. They are artificial by nature. Future studies should try to measure how children use these skills in real-world scenarios. We need to know if better scores on a memory test translate to better behavior on the playground or at home.
The study, “School changes minds: A meta-analysis shows that schooling modestly improves children’s executive functions,” was authored by Jamie Donenfeld, Mahita Mudundi, Erik Blaser, and Zsuzsa Kaldy.

A comprehensive analysis of English-language literature published over the last century reveals distinct patterns in how race and gender intersect within written text. The findings suggest that Black women and Asian men have historically appeared less frequently in books compared to Black men and Asian women, a phenomenon that aligns with psychological theories regarding social invisibility.
The research also provides evidence that these representational trends are not static and appear to shift in response to major historical events. These findings were published in the journal Current Research in Ecological and Social Psychology.
Joanna Schug, an associate professor at William & Mary, led the research team. She collaborated with Monika Gosin from the University of California San Diego and Nicholas P. Alt from Occidental College to investigate these long-term cultural trends. The study aimed to apply a historical lens to psychological theories that have typically been tested in laboratory settings.
Scholars have previously developed the concept of gendered race theory to explain how society perceives different groups. This framework suggests that the racial category “Black” is often cognitively associated with masculinity. Conversely, the racial category “Asian” is frequently associated with femininity.
These mental associations can lead to a phenomenon known as intersectional invisibility. This theory posits that individuals who do not fit the prototypical stereotypes – specifically Black women and Asian men – are often overlooked or marginalized. Because they do not align with the dominant gendered stereotypes of their racial groups, they may become less visible in cultural representations.
Prior experiments have supported these theories by showing that people are more likely to forget statements made by Black women or Asian men compared to other groups. Schug and her colleagues sought to determine if this psychological bias extended to cultural artifacts. They investigated whether these patterns of invisibility could be quantified in millions of books published over a 120-year period.
To conduct this analysis, the researchers utilized the Google Books Ngram dataset. This massive digital archive contains word frequency data from over 15 million books published between 1900 and 2019. The team examined two specific collections within this dataset: a general corpus of English-language books and a specific corpus containing only fiction texts.
The investigators tracked the frequency of specific phrases, known as “ngrams,” that combine racial and gender identifiers. They searched for terms such as “Black woman,” “Black man,” “Asian woman,” and “Asian man.” To ensure the search was comprehensive, they included various synonyms and historical terms relevant to different time periods.
For the category of Black individuals, the search included terms like “African American” and older designations that were common in the early 20th century. For Asian individuals, the researchers included specific ethnic groups such as Chinese, Japanese, Korean, and Vietnamese. They calculated the raw frequency of these terms to compare their prevalence in fiction versus nonfiction works.
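In spirit, the frequency measure is straightforward: a term's yearly count divided by the total words printed that year. A toy sketch of that calculation (the counts below are invented; a real analysis would use the per-year match counts in the Google Books Ngram dataset):

```python
# Toy sketch of an ngram relative-frequency calculation.
# Counts are hypothetical stand-ins for Google Books match counts.

def relative_frequency(term_counts, total_words):
    """Per-year frequency of a term, expressed per million words."""
    return {
        year: 1_000_000 * term_counts[year] / total_words[year]
        for year in term_counts
    }

term_counts = {1950: 120, 1980: 480}               # occurrences of one phrase
total_words = {1950: 60_000_000, 1980: 96_000_000}  # all words printed that year

freq = relative_frequency(term_counts, total_words)
print(freq)  # 2.0 per million in 1950, 5.0 per million in 1980
```

Normalizing by the yearly total matters because far more books were published late in the century; raw counts alone would make almost every term look like it was rising.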
The results from the first part of the study provided evidence supporting the existence of representational invisibility in literature. Throughout the majority of the 20th century, terms referring to Black men appeared more often than terms referring to Black women. This gap was present in both fiction and nonfiction texts.
Similarly, the analysis showed a consistent disparity in representations of Asian identities. References to Asian women generally outnumbered references to Asian men. This pattern persisted across the studied time period, although the gap was particularly pronounced in nonfiction books starting in the 1990s.
The researchers argue that these patterns reflect deep-seated historical stereotypes. For example, historical labor laws and immigration policies often restricted Asian men to domestic roles, which may have contributed to feminized stereotypes. In contrast, historical narratives surrounding Black identity have often focused on men, particularly in the context of labor and political struggle.
The study also included a comparison with White gender categories. The data showed that references to White men far exceeded references to White women. This finding aligns with the concept of androcentrism, where men are treated as the default representation of a group.
While the general patterns supported the theory of intersectional invisibility, the researchers observed a notable shift beginning in the late 20th century. In nonfiction books, references to Black women began to increase substantially around 1980. Eventually, the frequency of terms for Black women surpassed those for Black men in nonfiction texts.

To understand the drivers behind these shifts, the authors conducted a second study. They hypothesized that specific social movements might be influencing how often these groups were mentioned in print. They focused on the Civil Rights Movement and the Black Feminist movement.
The team identified key terms associated with these movements. For the Civil Rights Movement, they tracked phrases like “Civil Rights Movement” and “Black Power.” For the Black Feminist movement, they tracked terms such as “Black feminist” and “womanist.”
They then used statistical models to analyze the relationship between these movement-related terms and the frequency of race-gender categories over time. The analysis examined whether a rise in social movement terminology corresponded with a rise in the visibility of specific groups.
The findings indicated a strong link between the Civil Rights Movement and the representation of Black men. Increases in terms related to Civil Rights were positively associated with increases in references to Black men in both fiction and nonfiction. This suggests that the discourse of this era primarily elevated the visibility of Black men.
In contrast, the Civil Rights terminology did not show a significant positive association with references to Black women. This aligns with critiques from scholars like Kimberlé Crenshaw. Crenshaw has argued that antiracist efforts during that era often focused on the experiences of Black men, while feminist efforts often focused on White women.
However, the data revealed a different pattern regarding the Black Feminist movement. The rise in terms associated with Black Feminism was a significant predictor of increased references to Black women. This effect was particularly strong in nonfiction texts.
This suggests that the Black Feminist movement played a role in correcting the historical invisibility of Black women in literature. As scholars and activists began to produce more work centered on the experiences of Black women, the language in published books shifted to reflect this focus.
The study did observe some differences between fiction and nonfiction. For instance, while Black Feminism terms predicted more mentions of Black women in nonfiction, they were negatively associated with mentions of Black men in fiction. This indicates that different genres may respond to cultural shifts in distinct ways.
The researchers note that the patterns for Asian men and women remained relatively stable compared to the shifts seen for Black men and women. The representation of Asian men remained lower than that of Asian women throughout most of the period. The authors suggest that future research could investigate if specific Asian American social movements have had similar effects on representation.
But there are some limitations to consider. The Google Books dataset, while vast, is not a perfect representation of all culture. It tends to overrepresent academic and scientific publications, which might skew the results toward scholarly discourse rather than everyday language.
Additionally, the study is correlational. This means that while the rise in social movement terms coincides with changes in representation, it does not definitively prove that the movements caused the changes. Other unmeasured societal factors could have contributed to these trends.
The researchers also point out the complexity of the term “Asian” in their analysis. The study primarily utilized terms related to East Asian identities. This focus means the findings may not fully capture the experiences of South Asian or Southeast Asian groups.
Despite these limitations, the study offers new insights into how cultural stereotypes are preserved and challenged over time. It provides empirical evidence that the “invisibility” of certain groups is not just a theoretical concept but a measurable phenomenon in the written record.
The findings also highlight the potential of social movements to alter widespread cultural narratives. The increase in references to Black women following the rise of Black Feminism suggests that concerted intellectual and political efforts can successfully challenge representational biases.
Future research could build on this work by using more advanced text analysis methods. Newer techniques could examine the context in which these words appear, rather than just their frequency. This would allow for a deeper understanding of the quality of representation, beyond just the quantity.
The study, “A historical psychology approach to gendered racial stereotypes: An examination of a multi-million book sample of 20th century texts,” was authored by Joanna Schug, Monika Gosin, and Nicholas P. Alt.

Recent analysis of federal health data suggests that the recreational use of LSD is associated with a lower likelihood of alcohol use disorder. This finding stands in contrast to other psychedelic substances, whose past-year use showed no similar protective link. The results were published recently in the Journal of Psychoactive Drugs.
Alcohol use disorder affects millions of adults and stands as one of the most persistent public health challenges in the United States. The condition involves a pattern of alcohol consumption that leads to clinically detectable distress or impairment. Individuals with this disorder often find themselves unable to control their intake despite knowing it causes physical or social harm. Standard treatments exist, but relapse rates remain high. Consequently, medical researchers are exploring alternative therapeutic avenues.
In recent years, attention has shifted toward the potential utility of psychedelic compounds. Substances such as psilocybin and MDMA have shown promise in controlled clinical trials for treating various psychiatric conditions. However, there is a substantial distinction between administering a drug in a hospital with trained therapists and taking a drug recreationally. James M. Zech, a researcher at Florida State University, sought to investigate this difference. Zech collaborated with Jérémie Richard from Johns Hopkins School of Medicine and Grant M. Jones from Harvard University.
The team aimed to determine if the therapeutic signals seen in small clinical trials would appear in the general population. They utilized data from the National Survey on Drug Use and Health. This government project recruits a representative group of American citizens to answer detailed questions about their lifestyle and health. The researchers pooled data collected from 2021 through 2023. The final dataset included responses from 139,524 adults.
To ensure accuracy, the investigators did not simply look at who used drugs and who drank alcohol. They employed statistical models designed to account for confounding factors. They adjusted their calculations for variables such as age, biological sex, income, and education level. They also controlled for the use of other substances, including tobacco and cannabis. This process helped them isolate the specific relationship between psychedelics and alcohol problems.
The researchers assessed whether participants met the diagnostic criteria for alcohol use disorder within the past year. They also looked at the severity of the disorder by counting the number of symptoms reported. These symptoms range from experiencing cravings to neglecting responsibilities due to drinking.
The analysis revealed a distinct association regarding lysergic acid diethylamide, better known as LSD. Adults who reported using LSD in the past year were significantly less likely to meet the criteria for alcohol use disorder. The adjusted odds ratio indicated a 30 percent reduction in likelihood compared to non-users. Among those who did have the disorder, LSD users reported approximately 15 percent fewer symptoms.
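For readers unfamiliar with odds ratios: a "30 percent reduction in likelihood" corresponds to an adjusted odds ratio of about 0.70. A quick sketch of that arithmetic (the 0.70 figure here is inferred from the reported percentage, not quoted from the paper):

```python
# Converting an odds ratio into a percent change in odds.
# An odds ratio below 1 means lower odds in the exposed group.

def percent_change_in_odds(odds_ratio):
    """Percent change in odds implied by an odds ratio (negative = reduction)."""
    return (odds_ratio - 1.0) * 100.0

# An adjusted odds ratio of ~0.70 implies roughly a 30% reduction in odds:
change = percent_change_in_odds(0.70)
print(f"{change:.0f}%")  # prints "-30%"
```

Note that odds ratios describe changes in odds, not in probability; the two diverge when the outcome is common, so "30 percent lower odds" is not identical to "30 percent fewer cases."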
The study did not find the same pattern for other popular substances. The researchers analyzed the use of MDMA and ketamine over the same twelve-month period. Neither of these drugs showed a statistical association with the presence or absence of alcohol use disorder. This suggests that the potential protective effect observed with LSD might be specific to that compound or the context in which it is typically used.
A more complex picture emerged when the team examined lifetime usage histories. The survey asked participants if they had ever used certain drugs, even if they had not done so recently. Individuals who had used psilocybin or MDMA at any point in their lives were actually more likely to meet the criteria for alcohol use disorder in the past year. In contrast, lifetime use of DMT was linked to a lower probability of having the disorder.
These contradictory findings highlight the difficulty of interpreting observational data. The researchers propose several theories to explain why lifetime psilocybin use might track with higher alcohol problems while past-year LSD use tracks with lower ones. It is possible that individuals with existing substance use issues are more inclined to experiment with psilocybin.
Another possibility involves the nature of the psychedelic experience itself. While clinical trials optimize the setting to ensure a positive outcome, recreational use carries risks. The authors note that unsupervised trips can sometimes be distressing or psychologically destabilizing. If a person has a negative experience, they might increase their alcohol consumption as a way to cope with the resulting stress.
Conversely, the potential benefits of LSD could stem from psychological shifts often reported by users. Previous studies indicate that psychedelics can alter personality traits. Users often report increased “openness” and decreased “neuroticism” after a profound experience. If LSD facilitates such changes more reliably in naturalistic settings, it could theoretically reduce the psychological drivers of heavy drinking.
These results contribute to a growing body of literature that often points in different directions. For example, a survey of Canadian adults previously found that people self-reported large reductions in alcohol use after taking psychedelics. In that study, respondents specifically cited psilocybin as the most effective agent for change. The discrepancy between that survey and the current findings underscores the difference between self-perception and objective diagnostic criteria.
Clinical research has also provided evidence for the efficacy of psilocybin, provided it is administered professionally. A small trial conducted in Denmark tested a single high dose of psilocybin on patients with severe alcohol use disorder. In that experiment, patients received psychological support before and after the session. The clinicians observed a reduction in heavy drinking days and cravings.
The contrast between the clinical success of psilocybin and the negative association found in the general population data is noteworthy. It suggests that the element of therapy and professional guidance may be essential for achieving therapeutic outcomes. Without the safety net of a clinical setting, the risks of using these powerful substances may outweigh the benefits for some individuals.
There are some limitations to the current study that affect how the results should be viewed. The analysis is cross-sectional, meaning it captures a snapshot in time rather than following people forward. As a result, the researchers cannot prove that LSD causes a reduction in drinking. It is equally possible that people who choose to use LSD simply have different lifestyle patterns that protect them from alcohol addiction.
The study also faced constraints regarding the data available. The federal survey only asked about past-year use for a subset of drugs. For psilocybin, the survey only asked about lifetime use. This prevented the researchers from seeing if recent psilocybin use might have shown a positive benefit similar to LSD. Additionally, the data relies on self-reporting. Participants may not always be truthful about their involvement with illegal substances or the extent of their alcohol consumption.
The researchers emphasize the need for longitudinal studies in the future. Tracking individuals over many years would clarify the order of events. It would show whether psychedelic use typically precedes a change in drinking behavior. The authors also suggest that future research should measure the dosage and frequency of use. Understanding whether a person took a substance once or heavily and repeatedly is necessary to fully understand the risks and benefits.
The study, “The Relationship Between Psychedelic Use and Alcohol Use Disorder in a Nationally Representative Sample,” was authored by James M. Zech, Jérémie Richard, and Grant M. Jones.

Recent trends in popular culture suggest that sexual behaviors involving physical force, such as choking or spanking, have moved from the fringes into the mainstream. A new study involving a nationally representative sample of adults provides evidence that these practices are widespread in the United States, particularly among younger generations. Published in the Archives of Sexual Behavior, the findings indicate that while many adults engage in these acts consensually, a significant portion of the population has also experienced them without permission.
The prevalence of “rough sex” appears to have increased over the last decade. Depictions of these behaviors have become common in television, music, and social media. This visibility may lead to the perception that such practices are a standard or expected part of sexual intimacy. While these acts can enhance pleasure and intimacy for many, public health professionals have raised questions about safety and consent.
Previous attempts to measure these behaviors have often faced methodological hurdles. Many earlier surveys relied on data that is now outdated or focused exclusively on college students, limiting the ability to apply findings to the general public. Other studies used non-probability samples, such as online opt-in panels, which may not accurately reflect the broader population. Additionally, standard public health surveys often focus on disease prevention and pregnancy, omitting specific questions about acts like choking or slapping.
Debby Herbenick, a professor at the Indiana University School of Public Health, led the new research. Herbenick and her colleagues sought to fill the gaps in existing literature by collecting current data from a diverse range of ages and backgrounds. Their objective was to provide precise estimates of how many Americans engage in these behaviors and to identify demographic factors associated with them.
To achieve this, the researchers analyzed data from the 2022 National Survey of Sexual Health and Behavior. This survey is a recurring project that gathers detailed information on the sexual lives of Americans. The team used the Ipsos KnowledgePanel to recruit participants. This panel utilizes address-based sampling methods to create a pool of respondents that is statistically representative of the United States non-institutionalized adult population.
The final sample consisted of 9,029 adults between the ages of 18 and 94. The survey presented participants with a list of ten specific sexual behaviors. These included hair pulling, biting, face slapping, genital slapping, light spanking, hard spanking, choking, punching, name-calling, and smothering. The researchers avoided using the potentially ambiguous term "rough sex" in the questions. Instead, they asked about each specific act individually.
Participants reported their experiences in three distinct contexts. They indicated if they had performed these acts on a partner. They also indicated if a partner had done these acts to them with permission or consent. Finally, they reported if a partner had done these acts to them without permission or consent.
The results indicated that engagement in these behaviors is common. Approximately 48 percent of women and 61 percent of men reported having ever performed at least one of the listed behaviors on a partner. When it came to receiving these acts with consent, about 54 percent of women and 46 percent of men reported having at least one such experience.
Age emerged as a strong predictor of engagement. The researchers observed a substantial divide between adults under the age of 40 and those in older cohorts. Younger adults were significantly more likely to report both performing and receiving these behaviors. For instance, while choking a partner was rarely reported by men over the age of 50, it was a common experience for men in their 20s and 30s.
The types of behaviors reported varied in intensity. Biting and light spanking were among the most common activities reported by all groups. More intense behaviors, such as punching or smothering, were reported less frequently.
Gender patterns in the data generally aligned with traditional roles. Men were more likely to report being the ones to perform the acts, such as spanking or choking a partner. Conversely, women were more likely to report being on the receiving end of these behaviors. This suggests that even within practices considered "kinky" or alternative, mainstream participation often mirrors conventional active-male and passive-female scripts.
Transgender and gender nonbinary participants reported high rates of engagement across all categories. About 71 percent of these individuals reported ever performing at least one of the acts on a partner. Similarly, roughly 72 percent reported receiving at least one of the acts with consent.
One of the most concerning findings related to non-consensual experiences. The survey revealed that a substantial number of adults have been subjected to rough sex behaviors without their agreement. Approximately 20 percent of women reported that a partner had performed at least one of the ten behaviors on them without permission.
The rates of non-consensual experiences were also notable for men, with about 16 percent reporting such incidents. The risk was highest for transgender and gender nonbinary individuals. Approximately 35 percent of this group reported experiencing at least one of the behaviors without consent.
These findings align with and expand upon several lines of previous inquiry regarding rough sex. For example, a 2024 study by Döring and colleagues surveyed a national sample of German adults using an online panel. They found a lifetime prevalence of rough sex involvement of 29 percent. Similar to the current U.S. study, the German researchers identified a steep age gradient. Younger participants were much more likely to engage in these acts than older cohorts.
The German study also mirrored the gendered nature of these interactions observed in the U.S. data. Döring's team found that men were significantly more likely to take an active role, while women were more likely to take a passive role. This consistency across Western nations suggests that the rise of rough sex is occurring within the boundaries of traditional gender expectations rather than subverting them.
Earlier research involving U.S. college students also provides context for the current findings. A 2021 study by Herbenick and colleagues found that nearly 80 percent of sexually active undergraduates had engaged in rough sex.
The most common behaviors identified in that probability sample (choking, hair pulling, and spanking) match the most prevalent behaviors in the new national adult study. The extremely high rates among college students align with the age-related trends seen in the adult data. It appears that emerging adults are the primary demographic driving these statistics.
Research from an evolutionary psychology perspective offers potential explanations for why these behaviors are occurring. Studies by Burch and Salmon have suggested that consensual rough sex is often driven by a desire for novelty rather than aggression. Their work with undergraduates indicated that people who consume pornography are more likely to seek out these novel experiences. They also found that men were more likely to initiate rough sex in response to feelings of jealousy.
Burch and Salmon's findings framed these behaviors as largely recreational and resulting in little physical injury. The current study complicates that narrative. While many respondents reported consensual engagement, the high rates of non-consensual experiences indicate that these behaviors are not always harmless play. The prevalence of non-consensual choking and slapping suggests a darker side to the normalization of rough sex that novelty-seeking theories may not fully address.
The researchers pointed out several limitations to their study. The list of ten behaviors may not capture the full spectrum of what individuals consider to be rough sex. Additionally, the survey did not measure the "wantedness" of the acts. It is possible for an act to be consensual but not necessarily desired or enjoyed, and the study did not make this distinction.
The study also grouped bisexual and pansexual individuals together for analysis. This decision was made due to sample sizes but may obscure unique experiences within these distinct identities. Furthermore, the reliance on self-reported data means that memory recall could influence the accuracy of the lifetime prevalence estimates.
Future research aims to explore the nuances of consent in these scenarios. The researchers suggest investigating how partners communicate boundaries regarding specific acts like choking or slapping. Understanding the context in which non-consensual acts occur (whether as part of an otherwise consensual encounter or as distinct assaults) is a priority for public health.
The study, "Prevalence and Demographic Correlates of 'Rough Sex' Behaviors: Findings from a U.S. Nationally Representative Survey of Adults Ages 18–94 Years," was authored by Debby Herbenick, Tsung-chieh Fu, Xiwei Chen, Sumayyah Ali, Ivanka Simić Stanojević, Devon J. Hensel, Paul J. Wright, Zoë D. Peterson, Jaroslaw Harezlak, and J. Dennis Fortenberry.

New research published in the Personality and Social Psychology Bulletin reveals a psychological split within the political left regarding perceptions of in-group dissenters. The study indicates that self-identified Progressives and Traditional Liberals generate fundamentally different mental images of author J.K. Rowling based on her views regarding gender identity. While Progressives conceptualize Rowling as appearing cold and right-wing, Traditional Liberals visualize her in a warm and positive light.
Political psychology has historically focused on the ideological conflict between the Left and the Right. Scholars have frequently characterized right-wing individuals as more prone to rigidity and hostility toward out-groups. However, recent academic inquiries have shifted focus to the increasing fragmentation within the left-wing itself. This internal division is often categorized into two distinct subgroups: Progressives and Traditional Liberals.
Elena A. Magazin, Geoffrey Haddock, and Travis Proulx from Cardiff University conducted this research to investigate how these two groups perceive ideological dissenters from within their own ranks. The researchers utilized the Progressive Values Scale (PVS) to distinguish between the groups.
This scale identifies Progressives as those who emphasize mandated diversity, concern over cultural appropriation, and the public censure of offensive views. In contrast, Traditional Liberals tend to favor free expression and gradual institutional change over activist approaches.
The primary objective was to determine if the tendency to derogate, or negatively perceive, others extends to members of one's own political group who hold controversial views. J.K. Rowling served as the focal point for this investigation.
Rowling is a prominent figure who has historically supported left-wing causes but has recently expressed "gender critical" views that conflict with the "gender self-identification" stance held by many on the Left. The researchers sought to visualize how these political orientations shape the mental representations of such a figure.
The researchers employed a technique known as reverse correlation to capture these internal mental images. This method allows scientists to visualize a participant's internal representation of a person or group without asking them to draw or describe features explicitly. In the first study, the team recruited 82 left-wing university students in the United Kingdom to act as "generators."
During the image generation phase, participants viewed pairs of faces derived from a neutral base image overlaid with random visual noise. For each pair, they selected the face that best resembled their mental image of J.K. Rowling. By averaging the selected images across hundreds of trials, the researchers created composite "classification images" representing the average visualization of Rowling for Progressives and Traditional Liberals respectively.
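The averaging logic behind reverse correlation can be sketched in a few lines of code. This is only a toy illustration of the general technique: the image size, noise level, and the simulated "chooser" standing in for a participant are all made up here, and are not the study's actual stimuli or analysis software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 16x16 grayscale "base face" (a flat image stands in for a real photo).
base = np.full((16, 16), 0.5)
n_trials = 300

selected = []
for _ in range(n_trials):
    noise = rng.normal(0, 0.15, size=base.shape)
    stimulus_a = base + noise   # base face plus a noise pattern
    stimulus_b = base - noise   # base face plus the inverted pattern
    # A real participant would pick the face that better matches their mental
    # image of the target; here a hypothetical internal "template" simulates
    # that choice.
    template = np.linspace(0, 1, 16)[None, :] * np.ones((16, 1))
    score_a = np.sum(stimulus_a * template)
    score_b = np.sum(stimulus_b * template)
    selected.append(noise if score_a > score_b else -noise)

# The classification image is the base plus the average of the chosen noise
# patterns: features that consistently drove choices accumulate, while
# unsystematic noise cancels out.
classification_image = base + np.mean(selected, axis=0)
print(classification_image.shape)
```

Averaging the chosen patterns is what makes the method work: any facial feature the generator group consistently prefers survives the averaging, so the composite reveals their shared mental image without anyone describing it.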
A separate group of 178 undergraduates then served as "raters." These participants evaluated the resulting composite images on various character traits, such as warmth, competence, morality, and femininity. The raters were unaware of how the images were generated or which political group created them.
The results from Study 1 provided evidence of a stark contrast in perception. The image of Rowling generated by Progressives was rated as cold, incompetent, immoral, and relatively masculine. Raters also perceived this face as appearing "right-wing" and prejudiced.
On the other hand, the image generated by Traditional Liberals was evaluated positively across these dimensions. It appeared warm, competent, feminine, and distinctly left-wing. This suggests that while Progressives mentally penalized the dissenter, Traditional Liberals maintained a flattering perception of her.

To ensure these findings were not limited to a specific demographic or location, the researchers conducted a second study with a more diverse sample. Study 2 involved 382 adults from the United States. This experiment aimed to replicate the findings and expand upon them by including abstract targets alongside concrete ones.
Participants were asked to generate images for four different categories. These included specific public figures, such as J.K. Rowling (representing gender critical views) and Lady Gaga (representing gender self-identification views). They also generated images for generalized, abstract descriptions of a "fellow left-winger" who held either gender critical or self-identification beliefs.
Following the generation phase, 301 distinct participants rated the eight resulting composite images. The findings from the second study reinforced the patterns observed in the first. In general, faces representing gender critical views were rated more negatively than those representing self-identification views. This aligns with the general left-wing preference for the self-identification model.
However, the degree of negativity varied by generator type. Progressives consistently generated gender critical faces that were evaluated more harshly than those generated by Traditional Liberals. This held true for both the abstract descriptions and the specific example of J.K. Rowling.
A specific divergence occurred regarding the concrete representation of Rowling. Consistent with the UK study, US Progressives generated a negative image of the author. In contrast, US Traditional Liberals generated an image that raters viewed as warm, competent, and moral. This occurred even though Traditional Liberals generated a negative image for the abstract concept of a gender critical person.
This discrepancy suggests a nuanced psychological process for Traditional Liberals. While they may disagree with the abstract views Rowling holds, their mental representation of her as an individual remains protected by a "benevolent exterior." They appear to separate the person from the specific ideological disagreement in a way that Progressives do not.
The researchers also noted an unexpected pattern regarding gender perception. In both studies, the images of Rowling generated by Progressives were rated as looking less feminine and more masculine than those generated by Traditional Liberals. This finding implies that the devaluation of a target may involve stripping away gender-congruent features.
This research has limitations that warrant consideration. The first study relied heavily on a student population that was predominantly female and white. While the second study expanded the demographic range, both studies focused exclusively on the issue of gender identity. It remains unclear if this pattern of intra-left derogation would apply to other contentious topics, such as economic policy or foreign affairs.
Future research could explore these boundaries by using different targets of dissent. It would be valuable to investigate whether these visual biases persist if a dissenter apologizes or recants their views. Additionally, further study is needed to understand the "masculinization" effect observed in the Progressive-generated images.
These findings provide evidence that the political left is not a monolith regarding social cognition. The distinction between Progressives and Traditional Liberals involves more than just policy disagreements. It appears to involve fundamental differences in how they visualize and socially evaluate those who deviate from group norms.
The study, "The Face of Left-Wing Dissent: Progressives and Traditional Liberals Generate Divergently Negative and Positive Representations of J.K. Rowling," was authored by Elena A. Magazin, Geoffrey Haddock, and Travis Proulx.

A study comparing musicians with non-musicians across different age groups in India found that the ability to perceive speech in a noisy environment is lower in older participants, indicating that it declines with age. However, these differences between groups of different ages were more pronounced in non-musicians, indicating that age-related cognitive decline in this ability might be slower in musicians. The research was published in the Journal of Otology.
Age-related hearing loss, also called presbycusis, is a gradual decline in the ability to hear high-frequency sounds as people get older. It typically results from cumulative damage to the delicate hair cells in the inner ear that are responsible for converting sound waves into neural signals. Genetic predisposition, lifetime noise exposure, cardiovascular health, and metabolic conditions such as diabetes all influence how quickly it develops.
The decline often begins subtly, making it harder to understand speech in noisy environments or to distinguish similar consonants. Over time, people may feel that others are mumbling, need to increase the volume on devices, or struggle with group conversations. Because the change is slow, many individuals do not recognize the extent of their hearing loss until it becomes functionally limiting.
Study authors Kruthika S. and Ajith Kumar Uppunda wanted to explore how speech perception in noise (SPiN), the ability to perceive speech in a noisy environment, changes with age. They noted that musicians with lifelong musical training often exhibit a noticeable advantage in comprehending speech in noise.
However, prior research has been inconsistent; while some studies show clear benefits, others (such as a 2014 study by Ruggles et al.) found no differences when comparing young musicians to non-musicians. With this in mind, the study authors set out to explore how musicians and non-musicians of different ages differ in their speech perception in noise abilities.
Study participants were 75 musicians and 75 non-musicians. They were divided into five age groups: 10β19 years, 20β29 years, 30β39 years, 40β49 years, and 50β59 years, with each age group consisting of 15 musicians and 15 non-musicians.
First, study participants were screened to ensure they had normal hearing thresholds and outer hair cell function, ruling out standard clinical hearing loss. After that, they underwent an assessment of speech perception in noise using the Kannada Sentence Identification Test.
Results showed no differences between musicians and non-musicians in their basic abilities to hear tones or in the functioning of the cochlea's outer hair cells. However, musicians performed better than non-musicians on the speech-in-noise tasks across all age groups. As expected, the ability to perceive speech in noise was lower in older participants, but this decline was faster in non-musicians than in musicians. Non-musicians began to show significant deterioration in the 40–49 age range, while musicians maintained their performance levels until the 50–59 age range.
"Music training can significantly delay or lessen the degenerative consequences of the aging process on SPiN [speech perception in noise]. Furthermore, the current study found that music training increases SPiN capacities in people of different ages. Thus, musical activities, if incorporated into a comprehensive rehabilitation strategy in aging individuals, may promote healthy aging," the study authors concluded.
The study contributes to the scientific understanding of age-related changes in auditory processing. However, it should be noted that this was not a longitudinal study, but a study examining individuals of different ages at the same time. Because of this, it is not possible to know for sure whether the observed effects are truly effects of aging or differences between generations of people.
The paper, "Non-Musicians Experience Early Aging in Speech Perception in Noise Abilities Compared to Musicians," was authored by Kruthika S. and Ajith Kumar Uppunda.
















A new study published in Biopsychosocial Science and Medicine suggests that a father's psychological resilience may play a significant role in the biological health of his pregnant partner and the duration of her pregnancy. The research indicates that for married couples, a father's internal strengths are linked to lower systemic inflammation in the mother, which in turn predicts a longer gestational length.
Premature birth and low birth weight are significant public health concerns that can lead to long-term developmental challenges for children. Infants born too early or too small face increased risks for health problems such as hypertension, diabetes, and difficulties with emotional regulation later in life.
Medical professionals understand that high levels of inflammation in a mother's body during pregnancy can increase the risk of these adverse birth outcomes. While biological changes are normal during gestation, excessive inflammation can disrupt the delicate environment required for fetal development.
Past scientific inquiries have largely focused on identifying risk factors, such as socioeconomic disadvantage and chronic stress, that drive this inflammation. Less attention has been paid to positive psychological factors that might act as a buffer against these risks.
The concept of "resilience resources" refers to a safety net of psychological strengths that allow individuals to adapt successfully in the face of challenges. These resources typically include optimism, self-esteem, a sense of mastery over one's life, and social support.
The current study sought to determine if these resilience resources could protect against inflammation during pregnancy. Most prior work in this area has focused solely on the pregnant mother. This leaves a gap in understanding how a father's psychological state might influence the pregnancy's progression.
"We've known for quite some time that adverse birth outcomes, like preterm delivery, can have long-term consequences for the health of the child. We have also learnt about psychological and biological factors in pregnant people, like stress and excess inflammation, which can raise the risk for outcomes like preterm delivery," said study author Kavya Swaminathan, a doctoral student at UC Merced.
"However, we found that relatively little was known about whether psychological factors, social support, optimism, self-esteem, and mastery (i.e., resilience resources) could offer protective benefits. Relatedly, we recognized that there was limited research examining the role of both parents in protecting against adverse birth outcomes. To fill all these gaps in the literature, we decided to test whether resilience resources in the parents predicted lower inflammation in the mother and thus lower the risk for preterm delivery."
The research team analyzed data from the Community Child Health Network. This was a large, prospective study focusing on families from diverse backgrounds across five sites in the United States. The sites included Los Angeles, Washington D.C., Baltimore, Lake County in Illinois, and rural eastern North Carolina. The study specifically recruited families from communities with high proportions of residents living at or below the federal poverty line.
The researchers focused on a final sample of 217 couples who provided data during a subsequent pregnancy following the birth of an initial child. The participants included mothers and fathers who identified as Black, Hispanic, and White. The team assessed resilience resources using four validated psychological surveys.
Dispositional optimism was measured using the Life Orientation Test, which asks individuals about their expectations for the future. Self-esteem was evaluated using the Rosenberg Self-Esteem Scale to gauge feelings of self-worth. Mastery, or the sense of control over one's life, was assessed with a scale asking participants if they felt they could achieve their goals. Finally, perceived social support was measured by asking participants if they had people available to help them if needed.
To measure physiological inflammation, the team collected biological samples from the mothers. They utilized dried blood spots taken from a finger prick during the second and third trimesters of pregnancy. These samples were analyzed for C-Reactive Protein. This protein is a substance produced by the liver in response to inflammation. High levels of this protein are often used as a marker for systemic inflammation in the body.
The researchers utilized a statistical method known as structural equation modeling to analyze the relationships between these variables. They combined the four psychological measures into a single βresilience resourceβ factor for each parent. They then tested whether these factors predicted the motherβs levels of C-Reactive Protein and, subsequently, the babyβs birth weight and gestational age.
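The pathway logic being tested (predictor, mediator, outcome) can be illustrated with a toy simulation. This is only a sketch of a simple mediation check with invented numbers; the coefficients, noise levels, and the bivariate slopes below are all hypothetical, and the study itself used full structural equation modeling with latent factors rather than this simplified approach.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 217  # sample size matching the study's couple count

# Hypothetical data: father's resilience lowers maternal CRP, and lower
# CRP predicts a longer gestation (all effect sizes invented).
father_resilience = rng.normal(0, 1, n)
maternal_crp = -0.3 * father_resilience + rng.normal(0, 1, n)
gestation_weeks = 39 - 0.5 * maternal_crp + rng.normal(0, 1, n)

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Path a: predictor -> mediator; path b: mediator -> outcome.
# (A full analysis would estimate b while controlling for the predictor.)
a = slope(father_resilience, maternal_crp)
b = slope(maternal_crp, gestation_weeks)

# The indirect (mediated) effect is the product of the two paths.
indirect_effect = a * b
print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect_effect:.2f}")
```

With these invented signs, both paths come out negative, so their product is positive: higher paternal resilience is indirectly associated with longer gestation, which mirrors the shape of the pathway the researchers report.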
The data revealed a specific pathway of influence originating from the fathers. Higher levels of resilience resources in fathers were associated with lower levels of C-Reactive Protein in mothers during pregnancy. In turn, lower levels of this inflammatory marker predicted a longer gestational length. This suggests that a father's psychological stability may dampen biological stress responses in his partner.
This chain of associations was not uniform across all participants in the study. The link between paternal resilience, maternal inflammation, and pregnancy length was statistically significant only among married couples. It was not observed in couples who were cohabiting but unmarried. The effect was also absent in parents who were neither married nor living together.
"Our findings essentially suggest that in married couples, a father's psychological strengths, his resilience, are not only relevant to his well-being, but can also impact the health of his pregnant partner and unborn child," Swaminathan told PsyPost. "Thus, as we try to support the pregnant people in our lives, it might also be useful to try to bolster resilience in the father, who can, in turn, help buffer adverse health outcomes in his partner."
The researchers did not find evidence that the mother's own resilience resources directly lowered her inflammation or influenced birth outcomes in this specific statistical model. While maternal and paternal resilience scores were correlated, meaning resilient mothers tended to have resilient partners, the direct benefit to gestational length appeared to flow through the father's influence on maternal inflammation. Additionally, the study did not find a significant link between these factors and infant birth weight, only gestational length.
"At the outset, we were interested in the protective effects of both parents' resilience resources on adverse birth outcomes," Swaminathan said. "We were surprised to find that although paternal resilience resources seemed to matter for inflammation, and thereby, gestational length, maternal resources did not. This, to us, suggested that perhaps maternal resources offer protection in different ways that we did not test in this study."
The researchers propose several theoretical reasons for these observations. Committed relationships often involve a process called coregulation. This occurs when partners' physiological and emotional states become linked to one another. A resilient father may be better equipped to provide tangible support, such as assisting with daily tasks or encouraging adherence to medical advice. This support can reduce the mother's overall stress load.
Reduced stress typically results in a calmer immune system and lower production of inflammatory proteins. The "self-expansion theory" of love also offers a potential explanation. This theory suggests that in close relationships, individuals include their partner's resources and identity into their own sense of self. A mother may psychologically benefit from her partner's optimism and sense of mastery, effectively "borrowing" his resilience to buffer her own stress response.
The specificity of the finding to married couples warrants further consideration. Marriage often implies a higher level of long-term commitment and possibly greater time spent together compared to other relationship structures. This increased proximity and commitment might facilitate stronger coregulation and more consistent resource sharing. Married fathers in this sample also reported higher average levels of resilience resources than unmarried fathers, which could contribute to the stronger effect.
The study has certain limitations that affect how the results should be interpreted. The research design was observational rather than experimental. This means it cannot definitively prove that the fatherβs resilience caused the changes in the motherβs biology. It is possible that other unmeasured variables influenced the results.
Future research is needed to understand why the protective effect was specific to married couples in this dataset. Scientists should investigate whether the quality of the relationship or the amount of time spent together explains the difference. It would also be beneficial to examine other biological markers beyond inflammation. Cortisol, a stress hormone, might be another pathway through which resilience influences pregnancy.
The study, βParental resilience resources and gestational length: A test of prenatal maternal inflammatory mediation,β was authored by Kavya Swaminathan, Christine Guardino, Haiyan Liu, Christine Dunkel Schetter, and Jennifer Hahn-Holbrook.

Top-down, commanding leadership is frequently viewed with skepticism in the modern business world. Management experts typically champion collaborative environments where employees feel free to share ideas without fear of retribution. A new study challenges the universality of this view. The findings suggest that in family-owned businesses, a strict, authoritarian leadership style can actually boost innovation.
This positive effect is particularly strong when family members feel a deep emotional connection to the company and when the business operates in an emerging economy. The research was published in the Journal of Small Business Management.
Family businesses face a unique set of challenges compared to their non-family counterparts. They must balance professional goals with personal relationships. Previous research into how these firms innovate has produced conflicting results. Some observers argue that family firms are too conservative and risk-averse to innovate effectively. Others contend that their long-term focus allows them to be more efficient with resources.
Chelsea Sherlock from Mississippi State University led the research team. Her co-authors included David R. Marshall, Clay Dibrell, and Eric Clinton. The team sought to resolve existing debates by looking at leadership styles. They specifically examined authoritarian leadership. This style is characterized by a leader who exerts absolute control over decisions and demands unquestioning obedience from subordinates.
In a general corporate setting, such heavy-handed management often crushes creativity. Employees may feel stifled or resentful. Sherlock and her colleagues proposed that family firms operate under a different psychological contract. In these organizations, the leader is often a matriarch or patriarch. Their authority is derived not just from a job title but from their position within the family unit.
The researchers hypothesized that this unique context changes how leadership impacts innovation. Innovation requires the rapid mobilization of resources. It often demands quick, decisive action. An authoritarian leader can cut through bureaucratic red tape. They can allocate funds and personnel without engaging in lengthy debates. The team believed this efficiency could drive new product development and service improvements.
To test this theory, the researchers utilized data from the Successful Transgenerational Entrepreneurship Project (STEP). This is a global survey of family business leaders. The final sample included 1,267 family firms from 56 different countries. The businesses were small to medium-sized enterprises with fewer than 500 employees. The study covered a diverse range of nations, separating them into emerging economies and advanced economies.
The survey asked CEOs to rate their firm's innovativeness. Questions focused on their emphasis on research and development and their history of introducing new product lines. They also rated the level of authoritarian leadership within the firm. These questions assessed how much the leader retained decision-making authority and expected strict compliance.
A third key variable was emotional attachment. The researchers measured how strongly family members identified with the business. This concept reflects a sense of psychological ownership. In firms with high emotional attachment, the business is not just a source of income. It is a central part of the familyβs identity and legacy.
The analysis revealed a positive relationship between authoritarian leadership and firm innovativeness. Contrary to popular management theories that favor flat hierarchies, the data showed that strict family leaders often drove their companies to be more innovative. The researchers suggest this is because authoritarian leaders in family firms are deeply committed to the businessβs survival. They possess the power to force the organization to adapt and evolve.
This relationship was not uniform across all companies. The study found that emotional attachment played a vital moderating role. The positive effect of authoritarian leadership was significantly stronger in firms where the family felt a deep emotional bond.
When family members are emotionally invested, they are more likely to trust the leader’s intentions. They view the leader’s strict commands as necessary for protecting the family legacy. This trust reduces resistance. Family employees interpret top-down directives as focused decision-making rather than oppression. This alignment allows the firm to move quickly and cohesively toward innovative goals.
Conversely, in firms where emotional attachment was low, the benefits of authoritarian leadership were less apparent. Without that emotional buffer, strict control is more likely to breed resentment. If the family does not care deeply about the business, they may view an authoritarian leader as a tyrant rather than a guardian. This friction can stall progress and hinder the creative process.
The researchers also investigated how the economic environment influenced these dynamics. They distinguished between advanced economies, such as Germany and the United States, and emerging economies, such as Brazil and China. Emerging economies often lack robust institutional support structures. In these environments, the rule of law may be weaker, and resources may be scarcer.
The study found a specific “three-way interaction” between leadership, emotion, and economy. The combination of authoritarian leadership and high emotional attachment was most effective for innovation in emerging economies. In these unpredictable markets, a strong hand at the helm is often necessary to navigate external chaos.
In an emerging economy, a family firm cannot always rely on external institutions for stability. They must rely on themselves. A strict leader provides direction. When that leadership is backed by a family united by strong emotional ties, the firm becomes a resilient, innovative unit. The family accepts the hierarchy because it ensures their collective survival and prosperity.
The results were different for firms in advanced economies with low emotional attachment. In countries with stable markets and strong institutions, the need for a “strongman” leader is less pronounced. If a family in an advanced economy lacks an emotional connection to the business, an authoritarian leader may actually hurt innovation. The rigidity of the leadership style conflicts with the cultural norms of autonomy common in these regions.
These findings suggest that there is no “one size fits all” approach to leading a family business. The effectiveness of a leadership style depends heavily on the internal culture of the family and the external economic reality. What works for a tight-knit family business in an emerging market might fail for a disconnected family firm in a developed nation.
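A three-way interaction of this kind is typically modeled by adding product terms to a regression equation. The sketch below uses invented coefficients purely to illustrate the direction of the reported pattern, not the paper's fitted model: the slope of authoritarian leadership on innovativeness grows when emotional attachment is high and the firm operates in an emerging economy.

```python
def predicted_innovativeness(authoritarian, attachment, emerging):
    """Toy linear model with two- and three-way interaction terms.

    All coefficients are hypothetical, chosen only to mimic the
    direction of the reported effects (not estimates from the study).
    """
    b0, b_a, b_e, b_m = 3.0, 0.20, 0.10, 0.05  # intercept and main effects
    b_ae = 0.15    # authoritarian x attachment interaction
    b_aem = 0.25   # three-way interaction with emerging economy
    return (b0
            + b_a * authoritarian
            + b_e * attachment
            + b_m * emerging
            + b_ae * authoritarian * attachment
            + b_aem * authoritarian * attachment * emerging)

# Marginal effect of one unit of authoritarian leadership in each context:
slope_low = predicted_innovativeness(1, 0, 0) - predicted_innovativeness(0, 0, 0)
slope_high = predicted_innovativeness(1, 1, 1) - predicted_innovativeness(0, 1, 1)
# slope_high (0.60) exceeds slope_low (0.20): the leadership effect is
# strongest when high attachment meets an emerging economy.
```

Testing the model on both contexts shows why the two-way moderation alone is not enough: without the three-way term, the leadership slope would be the same in advanced and emerging economies.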
Sherlock and her team noted several caveats to their work. The study relied on cross-sectional data. This means it captured a snapshot of these firms at a single point in time. It is impossible to definitively prove that authoritarian leadership caused the innovation. It is possible that innovative firms simply tend to adopt stricter leadership structures to manage their growth.
Additionally, the data relied on self-reports from CEOs. While this is common in management research, it introduces the possibility of bias. Leaders may perceive themselves or their firms more favorably than an objective observer would. The study also focused on small and medium-sized firms. The dynamics in massive, publicly traded family conglomerates could be entirely different.
The authors recommend that future research look at these relationships over time. A longitudinal study could track how changes in leadership style affect innovation rates in subsequent years. They also suggest exploring other leadership styles, such as servant leadership or participative leadership, to see how they interact with family dynamics.
This research offers a practical message for family business owners. It indicates that consolidating power is not inherently bad for business growth. However, this authority must be exercised in a way that resonates with the family. Leaders who wish to drive innovation through strict control must ensure they also cultivate the family’s emotional bond to the firm. Without that emotional buy-in, the strategy is likely to fail.
The study, “The bright side of authoritarian leadership in family firms: An emotional attachment perspective on innovativeness,” was authored by Chelsea Sherlock, David R. Marshall, Clay Dibrell, and Eric Clinton.

The underlying causes of sexual difficulties may differ between men and women who experience symptoms of eating disorders, according to new research. While depression appears to be the primary driver of sexual challenges among women with these symptoms, eating disorder behaviors themselves play a more direct role for men. These findings were published in the International Journal of Sexual Health.
Sexual functioning is a fundamental aspect of human health and quality of life. It encompasses desire, arousal, and the ability to achieve orgasm. Problems in these areas can lead to lower psychological well-being and relationship dissatisfaction.
Previous research has established a clear link between eating disorders and sexual dysfunction. Individuals struggling with disordered eating often report higher rates of sexual dissatisfaction and physiological difficulties. This connection makes intuitive sense given that eating disorders involve severe disturbances in body image and physical health.
Hormonal imbalances caused by malnutrition can physically impede sexual response. Simultaneously, psychological factors such as body shame and anxiety about appearance can create mental barriers to intimacy. However, the exact nature of this relationship remains a subject of scientific inquiry.
A complicating factor is the presence of other mental health conditions. Anxiety and depression are highly common among people with eating disorders. These conditions are also well-known causes of sexual dysfunction on their own.
It has been difficult for researchers to determine if sexual problems are caused specifically by the eating disorder or by co-occurring depression and anxiety. Additionally, the vast majority of research on this topic has focused on women. There is a lack of data regarding how these dynamics play out in men.
To address these gaps, a team of researchers led by Maegan B. Nation undertook a comprehensive investigation. Nation is affiliated with the Department of Psychology at the University of Nevada Las Vegas. The team aimed to disentangle the effects of eating pathology from the effects of general distress.
The researchers sought to understand if eating disorder symptoms predict sexual problems when the influence of anxiety and depression is mathematically removed. They also aimed to compare these patterns across genders. This approach allows for a more precise understanding of which symptoms should be targeted in treatment.
The study recruited a large sample of undergraduate students from two public universities in the United States. The final analysis included 1,488 cisgender women and 646 cisgender men. Cisgender refers to individuals whose gender identity matches the sex they were assigned at birth.
Participants completed a series of online questionnaires. To assess eating disorder symptoms, the researchers used the Eating Disorder Examination Questionnaire. This tool measures behaviors such as dietary restraint and concerns regarding body shape and weight.
To evaluate sexual health, the team utilized the Medical Outcomes Study Sexual Functioning Scale. This measure asks participants to rate the severity of various problems. These issues include a lack of sexual interest, difficulty becoming aroused, inability to relax during sex, and difficulty reaching orgasm.
The researchers also administered a standard assessment for anxiety and depression. This allowed them to control for these variables in their statistical models. By doing so, they could isolate the unique contribution of eating disorder symptoms to sexual functioning.
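"Controlling for" anxiety and depression amounts to removing the variance those scores share with both the predictor and the outcome. The paper's exact models are not reproduced here; the sketch below illustrates the underlying logic with invented numbers, using the residualization (partial-correlation) approach: once depression is regressed out of both scores, a strong raw association can shrink substantially.

```python
def mean(xs):
    return sum(xs) / len(xs)

def residualize(y, x):
    """Residuals of y after a simple linear regression on x."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]

def correlation(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# Invented scores: in this toy data, depression drives both the
# eating-disorder measure and the sexual-dysfunction measure.
depression = [1, 2, 3, 4, 5, 6]
eating = [1.2, 1.8, 3.0, 4.0, 4.8, 6.2]
sexual_dys = [1.0, 2.2, 2.8, 3.8, 5.2, 6.0]

raw = correlation(eating, sexual_dys)
partial = correlation(residualize(eating, depression),
                      residualize(sexual_dys, depression))
# raw is near 1, but the partial association (depression removed)
# is far weaker -- the pattern the study reports for women
```

This is only a schematic of the logic; the study used regression models with both internalizing symptoms entered as covariates, which generalizes the same idea.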
The results revealed distinct patterns for men and women. Among the female participants, sexual functioning problems were quite common. Approximately 73 percent of women reported some level of difficulty.
The most frequent complaints among women were difficulty reaching orgasm and an inability to relax and enjoy sex. When the researchers ran their statistical models, they found an association between eating disorder symptoms and sexual problems.
However, once the researchers adjusted for anxiety and depression, the picture changed. For women, the direct link between eating disorder symptoms and sexual dysfunction became very weak. The effect sizes were small enough that they might not be clinically meaningful.
Instead, depression symptoms emerged as the stronger predictor of sexual difficulties in women. This suggests that the sexual problems often seen in women with disordered eating may actually be a byproduct of depressive symptoms. The eating disorder itself may not be the primary culprit for the sexual dysfunction.
The findings for men told a different story. About half of the male participants reported sexual functioning problems. The most common issues for men were a lack of sexual interest and an inability to relax.
For men, eating disorder symptoms continued to predict sexual dysfunction even after controlling for anxiety and depression. While the effect was small, it remained statistically significant. This implies that for men, there is a unique pathway between disordered eating and sexual health that is independent of general mood.
The authors propose several explanations for this gender disparity. One possibility involves the drive for muscularity. Men with body image issues often strive for a hyper-muscular physique rather than thinness.
This specific drive might influence sexual self-esteem and functioning in ways that differ from the drive for thinness typically seen in women. It is also possible that men experience unique sociocultural pressures regarding sexual performance and body image. These pressures could interact with eating pathology to disrupt sexual function.
The results for women align with existing theories about the heavy impact of depression on libido and arousal. It reinforces the idea that treating depression could alleviate sexual side effects in women with eating disorders.
For men, the results suggest that clinicians should look specifically at eating behaviors and body image cognitions. Addressing depression alone might not fully resolve sexual issues for male patients.
The study also examined sexual attraction as a variable. The researchers found that sexual orientation was linked to different levels of functioning. Men who reported attraction to the same gender or multiple genders reported higher levels of sexual problems compared to heterosexual men.
Conversely, women who were exclusively attracted to women reported fewer sexual functioning problems than those attracted to men. This adds nuance to the understanding of how sexual orientation interacts with sexual health.
There are limitations to this study that warrant consideration. The sample consisted of undergraduate students rather than a clinical population. People with diagnosed, severe eating disorders might show different patterns.
The study was also cross-sectional. This means the data represents a single snapshot in time. Researchers cannot definitively say that one factor causes another, only that they are related.
It is possible that the relationship is bidirectional. Sexual problems could contribute to body dissatisfaction, or vice versa. Longitudinal research, which follows participants over time, would be needed to establish causality.
The researchers also noted that the study focused on cisgender individuals. The experiences of transgender and gender-diverse individuals were not analyzed due to sample size constraints. Given that gender-diverse people often face higher rates of eating disorders, this is an area for future investigation.
Despite these limitations, the study offers new insights. It challenges the assumption that the relationship between eating disorders and sex is the same for everyone. It highlights the importance of considering gender when assessing and treating these co-occurring issues.
Maegan Nation and her colleagues suggest that screening for sexual functioning problems should be a routine part of mental health care. For women, this might involve a closer look at depressive symptoms. For men, it might require a specific focus on body image and eating behaviors.
Future research should aim to replicate these findings in clinical settings. Studies involving older adults or community samples would also be beneficial. Understanding the mechanisms behind these associations could lead to more effective interventions.
This research underscores the complexity of human sexuality and its relationship to mental health. It serves as a reminder that broad assumptions often fail to capture individual experiences. By breaking down these associations by gender and accounting for mood disorders, scientists can develop more targeted treatments.
The study, “Sexual Functioning and Eating Disorder Symptoms: Examining the Role of Gender and Internalizing Symptoms in an Undergraduate Population,” was authored by Maegan B. Nation, Shane W. Kraus, Melanie Garcia, Nicholas C. Borgogna, and Kara A. Christensen Pacella.

A recent study suggests that participation in online extremist communities may be driven by the search for basic human psychological needs. This research, published in the journal Social Psychological and Personality Science, found that users whose posts reflected a sense of agency and capability were more active and stayed in these groups for longer periods. The findings provide evidence that extremist environments might serve as a space where individuals attempt to satisfy fundamental desires for personal growth and social connection.
The rise of far-right extremist movements has led to an increase in religious and ethnic violence across the globe. Researchers have noted that these ideologies are often spread through social media and private chatrooms that allow for easy communication and organization. Despite years of study, the exact reasons why individuals are drawn to these digital spaces remain only partially understood.
Jeremy J. J. Rappel and his colleagues at McGill University conducted this research to see if established theories of human motivation could explain extremist behavior. They focused on basic psychological needs theory, which is a well-supported framework in psychology. This theory suggests that all humans have three primary needs: autonomy, competence, and relatedness.
Autonomy refers to the need to feel that one’s actions and thoughts are authentic and self-chosen. Competence is the desire to feel capable and effective in achieving goals or performing tasks. Relatedness is the need to feel a sense of belonging and to have meaningful connections with other people.
The researchers proposed that extremist groups might appeal to people because they offer a way to satisfy these needs. A person who feels powerless or lonely in their daily life might turn to a digital community that promises a sense of empowerment or camaraderie. While these groups are often outside of social norms, the psychological drive to join them might be the same drive that leads others to join sports teams or civic organizations.
To test these ideas, the research team analyzed a massive dataset of leaked conversations from the messaging platform Discord. The data came from a public database of over 200 extremist chatrooms that included fascists, white supremacists, and conspiracy theorists. The final sample was immense, consisting of approximately 20 million posts written by more than 86,000 individual users.
Because the data was so large, the researchers used a specialized computer technique called natural language processing. This allowed them to analyze the meaning of millions of posts without having to read each one manually. They used a tool known as the Universal Sentence Encoder, which converts text into numerical scores representing its semantic meaning.
The team compared the posts made by Discord users to standardized survey questions used by psychologists to measure autonomy, competence, and relatedness. If a user’s posts were mathematically similar to the language of those survey questions, the user received a higher score for that specific need. This method allowed the researchers to estimate the psychological state of each user based on their natural speech patterns.
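The article does not spell out the full pipeline, but the core operation, scoring a post by its embedding-space similarity to a survey item, reduces to cosine similarity between vectors. A minimal sketch with toy four-dimensional vectors standing in for real 512-dimensional Universal Sentence Encoder embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors; in the actual method these would be Universal Sentence
# Encoder embeddings of a competence survey item and of user posts.
competence_item = [0.9, 0.1, 0.0, 0.2]
post_a = [0.8, 0.2, 0.1, 0.1]   # phrased much like the survey item
post_b = [0.0, 0.9, 0.8, 0.1]   # semantically unrelated

score_a = cosine_similarity(competence_item, post_a)
score_b = cosine_similarity(competence_item, post_b)
# post_a receives the higher competence score
```

Averaging such scores over all of a user's posts would yield the per-user need estimates the analyses rely on.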
The researchers also included a control measure to ensure their results were accurate. They compared the user posts to a survey about food neophobia, which is the fear of trying new foods. Since a fear of new foods has nothing to do with extremism, this helped the team account for general patterns in how people use language. This step ensured that the findings were truly about psychological needs rather than just the way people structure their sentences.
To make the study more reliable, the team split their data into two halves. They used the first half to explore their ideas and the second half to confirm that their findings were consistent. This approach helps prevent scientists from finding patterns in data that only appear by chance.
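The split-half procedure itself is simple to sketch. Assuming a reproducible user-level random split (the article does not specify the exact unit or seed), it might look like:

```python
import random

def split_half(user_ids, seed=0):
    """Shuffle user IDs reproducibly and split them into two halves."""
    ids = list(user_ids)
    random.Random(seed).shuffle(ids)   # fixed seed keeps the split stable
    mid = len(ids) // 2
    return ids[:mid], ids[mid:]        # exploration half, confirmation half

# Hypothetical user IDs; the real dataset had more than 86,000 users.
explore, confirm = split_half(range(100))
# each user lands in exactly one half
```

Exploratory analyses are run only on the first half; any pattern found there counts as a finding only if it reappears in the untouched confirmation half.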
The results showed a clear link between psychological needs and how people behave in these chatrooms. Users whose language reflected high levels of autonomy and competence tended to be much more engaged. They made more posts overall and remained active in the chatrooms for a longer number of days.
Competence was the strongest predictor of how many posts a person would make. This suggests that people who feel effective or capable in these spaces are more likely to contribute to the conversation. Autonomy also played a significant role, as users who felt a sense of agency were more likely to stay involved with the group over time.
A different pattern was observed for the need for relatedness. While there was some evidence that social connection was linked to activity, the results were less consistent than those for autonomy and competence. In some models, relatedness was actually linked to fewer posts, which was a surprising outcome.
The researchers also looked at the use of hate terms as a measure of extremist signaling. They found that users who expressed more autonomy and competence used fewer hate terms in their posts. This suggests that people who feel more personally secure and capable may have less of a need to use aggressive language against others.
On the other hand, a higher need for relatedness was linked to a greater use of hate terms. The researchers suggest that this might be because new members use extreme language to gain acceptance from the group. By adopting the group’s hateful rhetoric, they may be attempting to prove their loyalty and satisfy their need for belonging.
These findings share similarities with a study published in 2021 in the Journal of Experimental Social Psychology. That previous research, led by Abdo Elnakouri, found that expressing hatred toward large groups or institutions can give people a greater sense of meaning in life. Both studies suggest that extreme attitudes and group participation serve a psychological function for the individual.
The earlier study by Elnakouri found that collective hate can make people feel more energized and determined. It suggests that having a clear enemy to fight against can simplify the world and provide a sense of purpose. The McGill study builds on this by showing how these motivations play out in real world digital interactions over long periods.
But there are some limitations that should be considered. Since the data came from leaked chatroom logs, the researchers could not ask the users for their consent or follow up with them directly. Additionally, the computer models could not always tell if a user was expressing that a need was being met or if they were complaining that it was being frustrated.
The researchers noted that the analysis focused only on text and did not include images, videos, or emojis. These visual elements are common in online extremist culture and might carry additional psychological weight. Future research could look at how visual media contributes to satisfying psychological needs in these spaces.
The study also could not account for “lurkers,” who are people who read the messages but never post anything. It is possible that the psychological needs of these silent observers are different from those who are highly active. Understanding the motivations of this quieter group could be a helpful direction for future investigations.
Despite these limitations, the study provides a new way to think about how people become radicalized. It suggests that instead of focusing only on ideology, it may be helpful to look at the psychological benefits people get from these groups.
The study, “Basic Psychological Needs Are Associated With Engagement and Hate Term Use in Extremist Chatrooms,” was authored by Jeremy J. J. Rappel, David D. Vachon, and Eric Hehman.

An analysis of policy documents from 116 R1 U.S. universities found that 63% of these institutions encourage the use of generative AI, with 41% offering detailed guidance for its use in the classroom. More than half of the institutions discussed the ethics of generative AI use, while the majority of guidance focused on using generative AI for writing activities. The research was published in Computers in Human Behavior: Artificial Humans.
Generative AI is a type of artificial intelligence that creates new content such as text, images, audio, code, or video based on patterns learned from large datasets. It works by using models like neural networks to predict and generate outputs that resemble human-created content.
People use generative AI to write documents, summarize information, create artwork, design products, and automate routine tasks. It also supports scientific research by analyzing data, generating hypotheses, and assisting in code or experiment design. Businesses use it for customer support, marketing, prototyping, and improving productivity across many workflows.
In education, generative AI helps students learn by providing explanations, tutoring, and personalized feedback. In medicine, it assists with interpreting data, drafting reports, and even exploring molecular designs for new drugs. Artists and designers use it to explore creative variations and accelerate their creative process. However, generative AI also raises concerns about misinformation, copyright issues, and ethical use.
Study author Nora McDonald and her colleagues wanted to explore what guidance higher education institutions were providing to their constituents about the use of generative AI, what the overall sentiment was regarding its use, and how that sentiment was manifested in actual guidelines.
They were also interested in whether ethical and privacy considerations were represented in the guidelines. The authors note that, although the use of generative AI (primarily ChatGPT) became popular very quickly after its release, some voices in education remain staunchly opposed to such applications.
The study authors collected policy documents and guidelines that were publicly available on the internet from 116 R1 institutions, utilizing the Carnegie Classification framework for classifying colleges and universities in the United States. According to this classification, R1 institutions are universities with the highest level of research activity.
The researchers downloaded documents that specifically dealt with generative AI, resulting in a total of 141 documents. Four researchers reviewed 20 of these documents to create a codebook (a coding system for classifying the documents according to their contents). They then used this system to categorize all the other documents.
Results showed that 56% of institutions provided sample syllabi for faculty that included policies on generative AI use, while 55% gave examples of statements regarding usage permissions, such as “embrace,” “limit,” or “prohibit.” Fifty percent provided activities that would help instructors integrate and leverage generative AI in their classrooms, while 44% discouraged the use of detection tools meant to catch AI-generated work. Fifty-four percent provided guidance for designing assignments in ways that discourage the use of generative AI by students, and 23% gave guidance on how to use AI detection tools.
Overall, 63% of universities encouraged the use of generative AI, and 41% offered detailed guidance for its use in the classroom. The majority of guidance focused on writing activities; references to code and STEM-related activities were infrequent and often vague, even when mentioned. Fifty-two percent of institutions discussed the ethics of generative AI, covering a broad range of topics.
“Based on our findings we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices,” the study authors concluded.
The study contributes to the scientific understanding of the stances U.S. universities take on generative AI use. However, its results are based on an analysis of policy documents rather than observations of real classroom practices, which may not fully reflect the provisions specified in the policies.
The paper, “Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines,” was authored by Nora McDonald, Aditya Johri, Areej Ali, and Aayushi Hingle Collier.

A new study published in the Journal of Occupational Health Psychology has found that the phenomenon popularly known as “Zoom fatigue” may have largely dissipated in the post-pandemic work environment. The findings suggest that video meetings are no longer significantly more exhausting than other types of meetings for most employees. This research challenges the narrative that virtual communication is inherently draining and indicates that workers may have adapted to the demands of remote collaboration.
The rapid shift to remote work during the COVID-19 pandemic necessitated a heavy reliance on video conferencing tools to maintain organizational operations. During this period, many employees reported feeling an unusual sense of exhaustion following these virtual interactions. This collective experience was quickly labeled “Zoom fatigue.” Previous empirical studies conducted during the height of the pandemic supported these anecdotal claims. They found a correlation between the frequency of video meetings and higher levels of daily fatigue among workers.
Various theories arose to explain why video calls might be uniquely taxing. Some researchers proposed that the cognitive load of video meetings was to blame. This theory posits that users must expend extra mental energy to monitor their own appearance on camera and to interpret non-verbal cues that are harder to read through a screen. Others suggested a theory of “passive fatigue.” This perspective argues that the lack of physical movement and the under-stimulation of sitting in front of a computer monitor lead to drowsiness and low energy.
However, the context of work has evolved since the early days of the pandemic. For many, video meetings are no longer a forced substitute for all human contact but rather a standard tool for business communication. The researchers behind the current study sought to determine if the exhaustion associated with video calls was a permanent feature of the technology or a temporary symptom of the pandemic era. They aimed to update the scientific understanding of virtual work by replicating a 2022 study in 2024.
“We conducted this study out of both pure research curiosity and a practical lens. As our first paper from the pandemic times (Nesher Shoshan & Wehrt, 2022), in which we identified that ‘Zoom fatigue’ exists, got a lot of attention, we were interested to know if the results could be replicated in a different, post-pandemic setting, and with a stronger empirical approach (larger sample, another measurement point, a more sophisticated analysis),” said Hadar Nesher Shoshan, a junior professor at Johannes Gutenberg University Mainz.
“Practically, we found out that our first study is being used to make organizational decisions. This is a large responsibility that we wanted to make sure is updated and evidence-based.”
To investigate this, the researchers utilized an experience sampling method. This approach allows researchers to capture data from participants in real-time as they go about their daily lives, rather than relying on retrospective surveys that can be subject to memory errors. The study was conducted in Germany in April 2024.
The research team recruited 125 participants who worked at least 20 hours per week and regularly attended video meetings. The participants represented various industries, including communication, service, and health sectors. Over a period of ten working days, these individuals completed short surveys at four specific times each day. This rigorous schedule resulted in a dataset covering 590 workdays and 945 distinct meetings.
In each survey, participants reported details about the last work meeting they had attended. They specified the medium of the meeting, such as whether it was held via video, telephone, face-to-face, or through written chat. They also rated their current levels of emotional exhaustion and “passive fatigue,” which was defined as feelings of sleepiness or lack of alertness.
The researchers also collected data on several potential moderating factors. They asked participants to rate their own level of active participation in the meeting, as well as the participation level of the group. They inquired about multitasking behaviors during the call. Additionally, they recorded objective characteristics of the meetings, such as the duration in minutes and the number of attendees.
The analysis of this extensive dataset revealed that video meetings were not related to higher levels of exhaustion compared to non-video meetings. Participants did not report feeling more drained or more drowsy after a video call than they did after a face-to-face meeting or a phone call. This finding held true even when the researchers statistically controlled for the level of exhaustion participants felt before the meeting began.
The researchers also examined whether working from home influenced these results. The analysis showed that the location of the worker did not moderate the relationship between video meetings and fatigue. This suggests that the environment of the home office is not a primary driver of the exhaustion previously associated with video calls.
"Our initial hypothesis was that Zoom fatigue still existed. After all, all previous studies had come to this conclusion, so there was no reason to doubt that this result was correct," said Nesher Shoshan. "However, we found no evidence of the phenomenon! According to our findings, online meetings are not more fatiguing than in-person meetings."
Regarding the specific behaviors within meetings, the researchers found that active participation and multitasking did not significantly alter the fatigue levels associated with video meetings. Whether an individual spoke frequently or remained quiet did not change the likelihood of experiencing exhaustion. Similarly, checking emails or performing other tasks during the meeting did not appear to increase the mental load enough to cause significant fatigue.
The study did identify one specific factor that made a difference: the duration of the meeting. The results indicated that video meetings lasting less than 44 minutes were actually less exhausting than meetings held through other media. This suggests there is a "sweet spot" for virtual collaboration where the efficiency of the format outweighs its cognitive costs. However, once a video meeting exceeded this time frame, the advantage disappeared, and fatigue levels became comparable to other meeting types.
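A duration-dependent finding like this is, in statistical terms, an interaction effect: the sign of the video-versus-other fatigue difference flips at a particular meeting length. The sketch below illustrates the arithmetic with hypothetical coefficients chosen only so that the crossover lands near the 44-minute mark the study reports; the study's actual model and estimates are not reproduced here.

```python
# Hypothetical coefficients for illustration only: a main effect of the
# video medium (negative = less fatigue) and a video-by-duration
# interaction that erodes that advantage as meetings run longer.
B_VIDEO = -0.22      # main effect of video medium (hypothetical)
B_INTERACT = 0.005   # video x duration interaction (hypothetical)

def video_fatigue_difference(duration_min: float) -> float:
    """Predicted fatigue difference (video minus non-video) at a given duration."""
    return B_VIDEO + B_INTERACT * duration_min

# Short video meetings come out ahead; long ones do not.
crossover = -B_VIDEO / B_INTERACT  # duration where the advantage disappears
print(round(crossover))  # 44
```

The crossover point is simply the ratio of the main effect to the interaction coefficient, which is why a single threshold (here, roughly 44 minutes) falls out of a moderated regression.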
Another finding involved the role of boredom. The researchers observed that when participants rated a video meeting as boring, it was associated with slightly higher levels of exhaustion compared to boring meetings held in other formats. This lends some support to the idea that under-stimulation can be a negative factor in virtual environments, even if it does not lead to general "Zoom fatigue."
The researchers propose several explanations for why their results differ from pandemic-era studies. They suggest that the βZoom fatigueβ observed in 2020 and 2021 may have been largely driven by the historical context. During the lockdowns, video meetings carried a symbolic meaning. They represented isolation, the loss of office camaraderie, and the stress of a global health crisis. In 2024, this symbolic weight has likely faded. Video calls have become a normalized part of the workday.
Additionally, it is plausible that workers have simply habituated to the format. Over the last few years, employees may have developed unconscious strategies to manage the cognitive demands of being on camera. They may be more comfortable with the technology and less self-conscious about their appearance on screen.
These findings have practical implications for organizational policy. As many companies push for return-to-office mandates, they often cite the limitations of virtual work as a justification. This study suggests that employee exhaustion is not a valid reason to discourage remote work or video meetings. Instead, the data indicates that virtual meetings can be an efficient and non-taxing way to collaborate, provided they are managed well. The results specifically point to the benefit of keeping video meetings relatively short to maximize employee well-being.
The study has some limitations that should be considered. The data relied on self-reports, which capture the participant's subjective experience but do not provide objective physiological measurements of stress. The study also focused on the German workforce, and cultural attitudes toward work and technology could vary in other regions. Furthermore, the study design allows for the observation of correlations but cannot definitively prove that the change in time period caused the disappearance of Zoom fatigue.
Future research could benefit from incorporating objective measures of fatigue, such as heart rate variability or cortisol levels. It would also be useful to investigate the content and quality of interactions within meetings. It is possible that negative interactions, such as conflicts or misunderstandings, drive exhaustion regardless of the communication medium. Finally, researchers might explore the positive potential of video meetings, investigating how they can be designed to promote engagement and flow rather than just avoiding fatigue.
"We hope that the average person takes from our study the importance of critical thinking, not take older results as truth and always ask questions," Nesher Shoshan told PsyPost. "For researchers, we want to emphasize the importance of transparency and replication. Finally, for organizations, we stand for flexible work arrangements and hybrid work that are shown to be effective in many other studies, and according to our study, do not come with a fatiguing price."
The study, "'Zoom Fatigue' Revisited: Are Video Meetings Still Exhausting Post-COVID-19?," was authored by Hadar Nesher Shoshan and Wilken Wehrt.



New research sheds light on why some individuals choose to remain in romantic relationships characterized by high levels of conflict. The study, published in the Journal of Applied Social Psychology, suggests that benevolent sexism and anxious attachment styles may lead people to base their self-worth on their relationship status, prompting them to utilize maladaptive strategies to maintain the partnership.
Romantic relationships are a fundamental component of daily life for many adults and are strongly linked to psychological well-being and physical health. Despite the benefits of healthy partnerships, many people find themselves unable or unwilling to exit relationships that are unfulfilling or fraught with frequent arguments. Psychological scientists have sought to understand the specific mechanisms that motivate people to maintain troubled relationships rather than ending them.
The new study, spearheaded by Carrie Underwood, focused specifically on the role of benevolent sexism in this dynamic. Benevolent sexism is a subtle form of sexism that subjectively views women positively but frames them as fragile and in need of men's protection and financial support. The researchers aimed to determine if having a partner who endorses these views makes a person more likely to stay in a troubled union.
"Some people find it difficult to leave romantic relationships that are characterized by high levels of conflict. This is concerning given that romantic relationships are a central part of daily life for many individuals," explained corresponding author Rachael Robnett, the director of the Women's Research Institute of Nevada and professor at the University of Nevada, Las Vegas.
"We were particularly interested in whether people are more inclined to stay in conflicted relationships when their romantic partner is described as endorsing benevolent sexism, which is a subtle form of sexism that emphasizes interdependence and separate roles for women and men in heterosexual romantic relationships."
"For example, benevolent sexism encourages men to protect and provide for women under the assumption that women are not well equipped to do these things themselves. Correspondingly, benevolent sexism also emphasizes that women's most important role is to care for their husband and children in the home."
The researchers conducted two studies. The first involved 158 heterosexual undergraduate women recruited from a large public university in the Western United States. The participants ranged in age from 18 to 55, with an average age of approximately 20 years. The sample was racially diverse, with the largest groups identifying as Latina and European American.
The researchers utilized an experimental design involving a hypothetical vignette. Participants were randomly assigned to read one of two scenarios describing a couple, Anthony and Chloe, engaging in a heated argument. In the control condition, participants simply read about the argument.
In the experimental condition, participants read an additional description of Anthony that portrayed him as endorsing benevolent sexism. This description characterized him as a provider who believes women should be cherished, protected, and placed on a pedestal by men. Participants were instructed to imagine they were the woman in the relationship and to report how they would respond to the situation.
After reading the scenario, the women reported how likely they would be to use various relationship maintenance strategies. These included positive strategies, such as emphasizing their commitment to the partner, and negative strategies, such as flirting with others to make the partner jealous. They also rated their likelihood of dissolving the relationship.
Finally, participants completed surveys measuring their own levels of benevolent sexism and relationship-contingent self-esteem. Relationship-contingent self-esteem measures the extent to which a person's feelings of self-worth are dependent on the success of their romantic relationship.
The researchers found distinct differences in anticipated behavior based on the description of the male partner. When the male partner was described as endorsing benevolent sexism, women were more likely to endorse using positive relationship maintenance strategies than they were to end the relationship. This preference for maintaining the relationship via prosocial means was not observed in the control condition.
The researchers also analyzed how the participants' own attitudes influenced their anticipated behaviors. Women who scored higher on measures of benevolent sexism tended to report higher levels of relationship-contingent self-esteem. In turn, higher relationship-contingent self-esteem was associated with a greater willingness to use negative maintenance strategies.
This statistical pathway suggests that benevolent sexism may encourage women to invest their self-worth heavily in their relationships. Consequently, when those relationships are troubled, these women may resort to maladaptive coping behaviors, such as jealousy induction, to restore the bond.
"When we asked women to envision themselves in a relationship that was characterized by a high level of conflict, they reported a desire to remain in the relationship and resolve the conflict via prosocial strategies when the man in the relationship espoused ideals that are in line with benevolent sexism," Robnett told PsyPost.
"We did not see the same pattern in a control condition in which the man's gender attitudes were not described. This illustrates the insidious nature of benevolent sexism: Its superficially positive veneer may entice some women to tolerate relationships that do not serve their best interests."
The second study built upon these findings by including both women and men and by incorporating attachment theory. The sample consisted of 190 heterosexual undergraduate students, with a majority being women. The average age was roughly 20 years, and the participants were recruited from the same university participant pool.
Similar to the first study, participants read the vignette about the couple in a heated argument. However, in this study, all participants were assigned to the "benevolent partner" condition. Women read the description of Anthony used in the first study. Men read a description of Chloe, who was portrayed as believing women should be domestic caretakers who rely on men for fulfillment.
Participants completed the same measures regarding relationship maintenance and self-esteem used in the previous study. Additionally, they completed the Experiences in Close Relationships-Revised questionnaire to assess anxious and avoidant attachment styles. Anxious attachment involves a fear of rejection and a strong desire for intimacy, while avoidant attachment involves discomfort with closeness.
The results indicated that the psychological mechanisms functioned similarly for both women and men. The researchers found that participants with higher levels of anxious attachment were more likely to base their self-esteem on their relationship. This heightened relationship-contingent self-esteem then predicted a greater likelihood of using negative relationship maintenance strategies.
The analysis provided evidence that relationship-contingent self-esteem mediates the link between anxious attachment and maladaptive relationship behaviors. This means that anxiously attached individuals may engage in negative behaviors not just because they are anxious, but because their self-worth is on the line.
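Mediation claims of this kind are commonly tested with a product-of-coefficients approach: the indirect effect of anxious attachment on negative strategies through relationship-contingent self-esteem is the product of the attachment-to-self-esteem path and the self-esteem-to-strategies path. Below is a minimal sketch on simulated data; the variable names, effect sizes, and analysis details are invented for illustration and are not the study's.

```python
# Hedged sketch of simple mediation (product of coefficients):
# anxious attachment -> relationship-contingent self-esteem (RCSE)
# -> negative maintenance strategies. All data are simulated.
import random

random.seed(0)
n = 190  # matches the Study 2 sample size

anxiety = [random.gauss(0, 1) for _ in range(n)]
# RCSE depends on anxiety (path a); the outcome depends on RCSE (path b).
# The true paths (0.5 and 0.4) are arbitrary illustration values.
rcse = [0.5 * x + random.gauss(0, 1) for x in anxiety]
negative = [0.4 * m + random.gauss(0, 1) for m in rcse]

def slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

a = slope(anxiety, rcse)   # path a: attachment -> RCSE
b = slope(rcse, negative)  # path b: RCSE -> outcome (simple slope)
print(f"indirect effect (a*b) ~ {a * b:.2f}")
```

In practice, inference on the indirect effect would use a partialled path b and bootstrapped confidence intervals, which is the standard approach in modern mediation analysis.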
The study also reinforced the connection between benevolent sexism and self-worth found in the first experiment. Higher levels of benevolent sexism predicted higher relationship-contingent self-esteem for both men and women. Conversely, participants with higher levels of avoidant attachment were less likely to base their self-worth on the relationship.
"Women and men who were high in relationship-contingent self-esteem were particularly likely to report that they would remain in the relationship and attempt to resolve the conflict via maladaptive strategies such as making their partner jealous," Robnett explained. "Relationship-contingent self-esteem occurs when someone's sense of self is highly invested in their romantic relationship, such that their self-esteem suffers if the relationship ends. Our findings suggest that relationship-contingent self-esteem may encourage people to (a) remain in troubled relationships and (b) cope with their dissatisfaction by engaging in maladaptive behaviors."
"Our findings further illustrated that relationship-contingent self-esteem tends to be particularly high in women and men who are high in benevolent sexism and high in anxious attachment. In theory, this is because both of these constructs encourage people to be hyper-focused on their romantic relationships."
"In sum, our findings suggest a possible chain of events where anxious attachment and benevolent sexism encourage people to invest their sense of self in romantic relationships," Robnett said. "In turn, this may contribute to them staying in conflicted romantic relationships and attempting to resolve the conflict via maladaptive strategies."
The study, like all research, has some limitations. Both studies relied on hypothetical vignettes rather than observing actual behavior in real-time conflicts. How people anticipate they will react to a scenario may differ from how they react in a real-world situation with an actual partner.
Additionally, the sample consisted of undergraduate students, which may limit how well the findings apply to older adults or long-term married couples. The researchers also pointed out that the study design was cross-sectional, which prevents definitive conclusions about cause and effect.
"We can only speculate about causal flow in this chain of events," Robnett explained. "We would need an experiment or longitudinal data to draw stronger conclusions."
The study, "Benevolent Sexism, Attachment Style, and Contingent Self-Esteem Help to Explain How People Anticipate Responding to a Troubled Romantic Relationship," was authored by Carrie R. Underwood and Rachael D. Robnett.

Spending the morning hours in dim indoor lighting may cause healthy individuals to exhibit biological changes typically seen in people with depression. A study published in the Journal of Psychiatric Research indicates that a lack of bright light before noon can disrupt sleep cycles and hormonal rhythms. These physiological shifts suggest that dimly lit environments could increase a person's vulnerability to mood disorders.
The human body relies on environmental cues to regulate its internal clock. This system is known as the circadian rhythm. It dictates when we feel alert and when we feel ready for sleep. The most powerful of these cues is light. When sunlight enters the eye, it signals a region of the brain called the suprachiasmatic nucleus. This brain region then coordinates hormone production and body temperature. In a natural setting, humans would experience bright light in the morning and darkness at night.
Modern life has altered this natural pattern. Many people spend the vast majority of their waking hours inside buildings. The artificial light in these spaces is often far less intense than natural daylight.
Jan de Zeeuw, Dieter Kunz, and their colleagues at St. Hedwig Hospital and Charité-Universitätsmedizin Berlin have spent years investigating this phenomenon. They describe this lifestyle as "Living in Biological Darkness." Their previous research found that urban residents spend approximately half of their daytime hours in light levels lower than 25 lux. For comparison, a cloudy day outside might measure over 1,000 lux.
The researchers wanted to understand the specific consequences of this low-light lifestyle. They were particularly interested in how it affects the hypothalamic-pituitary-adrenal axis. This system controls the release of cortisol. Cortisol is often called the stress hormone. In a healthy person, cortisol levels peak early in the morning to help wake the body. These levels then gradually decline throughout the day and reach their lowest point in the evening. This rhythm allows the body to wind down for sleep.
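Circadian researchers often summarize a daily hormone profile like this with a single-component cosinor model: a cosine fitted to the 24-hour data, yielding a mean level (mesor), an amplitude, and a peak time (acrophase). The sketch below fits such a model to invented hourly values shaped like the healthy pattern described above, with a morning peak; none of these numbers come from the study.

```python
from math import cos, sin, pi, atan2, sqrt

hours = list(range(24))
# Hypothetical cortisol profile (arbitrary units): peak near 08:00.
cortisol = [10 + 5 * cos(2 * pi * (h - 8) / 24) for h in hours]

# Fit y = M + A*cos(w*t) + B*sin(w*t) by least squares. Over one full
# cycle the cosine and sine regressors are orthogonal, so the fit
# reduces to simple averages.
w = 2 * pi / 24
M = sum(cortisol) / 24  # mesor (rhythm-adjusted mean)
A = 2 * sum(y * cos(w * h) for h, y in zip(hours, cortisol)) / 24
B = 2 * sum(y * sin(w * h) for h, y in zip(hours, cortisol)) / 24
amplitude = sqrt(A * A + B * B)
acrophase = (atan2(B, A) / w) % 24  # hour of the fitted peak

print(f"mesor={M:.1f}, amplitude={amplitude:.1f}, acrophase={acrophase:.1f}h")
# -> mesor=10.0, amplitude=5.0, acrophase=8.0h
```

A blunted rhythm of the kind described in depressed patients would show up in such a fit as a reduced amplitude and a flatter evening decline, which is one way the disruption reported later in the article could be quantified.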
In patients diagnosed with depression, this rhythm often malfunctions. Their cortisol levels frequently remain elevated throughout the day and into the evening. Another biological marker of depression involves specific changes in sleep architecture. Sleep is composed of different stages, including rapid eye movement, or REM, and deep slow-wave sleep.
Depressed patients often experience a shift in deep sleep from the beginning of the night to later cycles. The researchers aimed to see if dim light alone could induce these depression-like symptoms in healthy volunteers.
The study recruited twenty healthy young adults to participate in a controlled experiment. The group consisted of ten men and ten women with an average age of about twenty-four. To ensure accuracy, the participants maintained a consistent sleep schedule for a week before the testing began. The researchers monitored their adherence using wrist-worn activity trackers.
The participants were randomly divided into two groups. The experiment focused on the morning hours between 8:00 AM and 12:00 PM. For five days, one group spent these hours in a room with low-intensity incandescent lighting. This light measured 55 lux and had a warm, yellowish color temperature. This environment simulated a dimly lit living room or a workspace with poor lighting.
The second group spent the same morning hours in a room with higher-intensity fluorescent lighting. This light measured 800 lux and had a cooler, bluish tone. This intensity mimics a brightly lit office or classroom. It served as a control condition. During the afternoons and evenings, participants left the laboratory and went about their normal lives. They returned to the lab for specific testing sessions.
The research team used several methods to track biological changes. They collected urine and saliva samples to measure hormone concentrations. They focused on cortisol and melatonin. They also utilized polysomnography to record sleep patterns. This involves placing sensors on the head to measure brain waves during the night. The team also assessed the participants' mood and reaction times using standard psychological tests.
The findings revealed distinct differences between the two groups. The participants exposed to the dim incandescent light showed a disruption in their cortisol rhythms. Their cortisol levels were elevated in the late afternoon and evening. This elevation occurred at a time when the hormone should ideally be decreasing. The statistical analysis showed that this increase was not a random fluctuation. The result mirrors the blunted circadian rhythm often observed in depressive illnesses.
Sleep patterns in the dim light group also deteriorated. After repetitive exposure to low morning light, these individuals slept for a shorter duration. On average, their total sleep time decreased by about twenty-five minutes. The internal structure of their sleep changed as well. Deep sleep is characterized by slow-wave activity in the brain. Typically, the bulk of this restorative sleep occurs in the first few cycles of the night.
In the dim light group, this slow-wave activity shifted. It decreased in the earlier part of the night and appeared more frequently in later sleep cycles. This delay in deep sleep is a known characteristic of sleep architecture in patients with depression. The participants in this group also reported feeling subjectively worse. They rated themselves as sleepier and sadder after days of low light exposure compared to the bright light group.
The group exposed to the brighter fluorescent light did not show these negative markers. Their cortisol levels followed a more standard daily curve. Their deep sleep remained anchored in the early part of the night. The researchers did note one specific change in this group. The bright light appeared to increase the amount of REM sleep they experienced toward the end of the night.
The study suggests that light intensity affects more than just vision. It serves as a biological signal that keeps the bodyβs systems synchronized. The βmaster clockβ in the brain requires sufficient light input to function correctly. This input comes largely from specialized cells in the retina that are sensitive to blue light. Incandescent bulbs, like those used for the dim group, emit very little blue light. Fluorescent bulbs emit more of these wavelengths.
When the brain does not receive a strong morning light signal, the circadian system may weaken. This weakening can lead to a misalignment of internal rhythms. The researchers note that the suprachiasmatic nucleus has direct neural pathways to the adrenal glands. This connection explains how light, or the lack of it, can directly influence cortisol production.
The authors propose that the observed changes could represent a "vulnerability" to depression. The participants were healthy and did not develop clinical depression during the short study. However, their bodies began to mimic the physiological state of a depressed person. The combination of high evening cortisol and disrupted sleep creates a physical environment where mood disorders might more easily take root.
The researchers stated, "In healthy subjects repetitive exposure to low-intensity lighting during pre-midday hours was associated with increased cortisol levels over the day and delayed slow-wave-activity within nighttime sleep, changes known to occur in patients with depressive illnesses."
They continued by noting the implications of these sleep changes. "Insomnia-like changes in sleep architecture shown here may pave the avenue to more vulnerability to depression and contribute to the understanding of pathophysiology in depressive illnesses."
There are limitations to this study that should be considered. The sample size was relatively small, with only ten people in each group. A larger pool of participants would provide more robust data. The design compared two different groups of people rather than testing the same people under both conditions. This introduces the possibility that individual differences influenced the results.
Additionally, the researchers could not control the light exposure participants received after leaving the lab at noon. While they wore activity monitors, these devices cannot always perfectly track light intake. However, previous studies by the same team suggest that urban residents generally encounter low light levels throughout the day. It is plausible that the participants did not receive significant bright light in the afternoons to counteract the morning dimness.
Future research should investigate these effects over longer periods. A study lasting weeks or months could determine if these biological changes eventually lead to psychological symptoms. It would also be beneficial to test different light sources, such as LED lighting, which is now common. Understanding the specific wavelengths of light that best support the circadian rhythm is an ongoing area of scientific inquiry.
The findings carry practical implications for building design and public health. They suggest that the standard lighting found in many homes and offices may be insufficient for biological health. Increasing light levels during the morning could serve as a simple preventative measure. This might involve using brighter artificial lights or designing spaces that admit more daylight.
The concept of "Living in Biological Darkness" highlights a mismatch between human biology and the modern environment. Our bodies evolved to expect bright mornings. Depriving the brain of this signal appears to set off a chain reaction of hormonal and neurological disruptions. While a few days of dim light may not cause immediate harm, chronic exposure could erode mental resilience.
Jan de Zeeuw and his co-authors argue that it is time to reconsider how we light our indoor spaces. They suggest that integrating bright light into schools, workplaces, and nursing homes could improve overall health. By mimicking the natural rising of the sun, we may be able to stabilize our internal rhythms. This stabilization could protect against the physiological precursors of depression.
The study, "Living in biological darkness III: Effects of low-level pre-midday lighting on markers of depression in healthy subjects," was authored by Jan de Zeeuw, Claudia Nowozin, Martin Haberecht, Sven Hädel, Frederik Bes, and Dieter Kunz.

Recent experimental findings suggest that d-amphetamine, a potent central nervous system stimulant, can override learned sexual inhibitions in male rats. The research demonstrates that the drug causes animals to pursue sexual partners they had previously learned to avoid due to negative reinforcement. These results, which highlight a disruption in the brain's reward and inhibition circuitry, were published in the journal Psychopharmacology.
To understand the specific nature of this study, one must first look at how animals learn to navigate sexual environments. In the wild, animals must determine when it is appropriate to engage in mating behavior and when it is not. A male rat that attempts to mate with a female that is not sexually receptive will be rejected.
Over time, the animal learns to associate certain cues, such as scents or locations, with this rejection. This learning process is known as conditioned sexual inhibition. It serves an evolutionary purpose by preventing the male from wasting energy on mating attempts that will not result in reproduction.
Researchers have long sought to understand how recreational drugs alter this specific type of decision-making. While it is well documented that stimulants can physically enable or enhance sexual behavior, less is understood about how they affect the psychological choice to engage in sex when an individual knows they should not. Previous work has established that alcohol can dismantle this learned inhibition. The current research aimed to see if d-amphetamine, a drug with a very different chemical mechanism, would produce a similar result.
The research team was led by Katuschia Germé from the Centre for Studies in Behavioral Neurobiology at Concordia University in Montreal. The team also included Dhillon Persad, Justine Petit-Robinson, Shimon Amir, and James G. Pfaus. They designed an experiment to create a strong mental association in the subjects. They used male Long-Evans rats as the subjects for the experiment.
The researchers began by training the rats over the course of twenty sessions. This training took place in specific testing chambers. During these sessions, the males were exposed to two different types of female rats. Some females were sexually receptive and carried no added scent. Other females were not sexually receptive and were scented with an almond extract.
The male rats quickly learned the difference. They associated the neutral, unscented females with sexual reward. Conversely, they associated the almond scent with rejection and a lack of reward. After the training phase, the males would reliably ignore females that smelled like almond, even if those females were actually receptive. The almond smell had become a "stop" signal. This state represents the conditioned sexual inhibition that the study sought to investigate.
Once this inhibition was established, the researchers moved to the testing phase. They divided the rats into groups and administered varying doses of d-amphetamine. Some rats received a saline solution which served as a control group with no drug effect. Others received doses of 0.5, 1.0, or 2.0 milligrams per kilogram of body weight.
The researchers then placed the male rats in a large open arena. This environment was different from the training cages to ensure the rats were reacting to the females and not the room itself. Two sexually receptive females were placed in the arena with the male. One female was unscented. The other female was scented with the almond extract.
Under normal circumstances, a trained rat would ignore the almond-scented female. This is exactly what the researchers observed in the group given the saline solution. These sober rats directed their attention almost exclusively toward the unscented female. They adhered to their training and avoided the scent associated with past rejection.
The behavior of the rats treated with d-amphetamine was distinct. Regardless of the dose administered, the drug-treated rats copulated with both the unscented and the almond-scented females. The drug had completely eroded the learned inhibition. The almond scent, which previously acted as a deterrent, no longer stopped the males from initiating copulation.
It is important to note that the drug did not simply make the rats hyperactive or indiscriminate due to confusion. The researchers tracked the total amount of sexual activity. They found that while the choice of partner changed, the overall mechanics of the sexual behavior remained competent. The drug did not create a chaotic frenzy. It specifically removed the psychological barrier that had been built during training.
Following the behavioral tests, the researchers investigated what was happening inside the brains of these animals. They utilized a technique that stains for the Fos protein. This protein is produced within neurons shortly after they have been active. By counting the cells containing Fos, scientists can create a map of which brain regions were working during a specific event.
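Analyses of Fos immunostaining typically reduce to comparing counts of labeled cells per brain region between treatment groups, for example with a two-sample t-test. Here is a minimal sketch of that comparison using Welch's t statistic; the cell counts below are invented for illustration and are not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical Fos-positive cell counts per section for one brain region.
saline = [12, 15, 11, 14, 13, 10, 16, 12]
amphetamine = [22, 25, 19, 27, 24, 21, 26, 23]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(b) - mean(a)) / sqrt(va + vb)

print(f"t = {welch_t(saline, amphetamine):.2f}")  # t = 8.85
```

A complete analysis would compute the Welch-Satterthwaite degrees of freedom and a p-value, and would typically correct for the multiple brain regions being compared.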
To do this, the researchers re-exposed the rats to the almond odor while they were under the influence of the drug or saline. They did not include females in this phase. This allowed the team to see how the brain processed the cue of the almond scent in isolation.
The analysis revealed distinct patterns of brain activation. In the rats that received saline, the almond odor triggered activity in the piriform cortex. This is a region of the brain involved in processing the sense of smell. However, these sober rats showed lower activity in the medial preoptic area. This area is critical for male sexual behavior. This pattern suggests that the sober brain registered the smell and dampened the sexual control center in response.
The rats treated with d-amphetamine showed a reversal of this pattern. When exposed to the almond scent, these rats displayed increased activity in the nucleus accumbens. The nucleus accumbens is a central component of the brain's reward system. It is heavily involved in processing motivation and pleasure.
The drug also increased activity in the ventral tegmental area. This region produces dopamine and sends it to the nucleus accumbens. The presence of the drug appeared to hijack the processing of the inhibitory cue. Instead of the almond smell triggering a "stop" signal, the drug caused the brain to treat the smell as a neutral or potentially positive stimulus.
The researchers noted that the activation in the nucleus accumbens was particularly telling. This region lights up in response to rewards. By chemically stimulating this area with d-amphetamine, the drug may have overridden the negative memory associated with the almond scent. The cue for rejection was seemingly transformed into a cue for potential reward.
The team also observed changes in the amygdala. This part of the brain is often associated with emotional processing and fear. The drug-treated rats showed different activity levels in the central and basolateral nuclei of the amygdala compared to the control group. This suggests that the drug alters the emotional weight of the memory.
These findings align with previous research conducted by this laboratory regarding alcohol. In prior studies, the researchers found that alcohol also disrupted conditioned sexual inhibition. The fact that two very different drugs, one a depressant and one a stimulant, produce the same behavioral outcome suggests they may act on a shared neural pathway.
The authors propose that this shared pathway likely involves the mesolimbic dopamine system. This is the circuit connecting the ventral tegmental area to the nucleus accumbens. Both alcohol and amphetamines are known to increase dopamine release in this system. This surge in dopamine appears to be strong enough to wash out the learned signals that tell an individual to stop or refrain from a behavior.
There are limitations to how these findings can be interpreted. The study was conducted on rats, and animal models do not perfectly replicate human psychology. The complexity of human sexual decision-making involves social and cultural factors that cannot be simulated in a rodent model. Additionally, the study looked at acute administration of the drug. The effects of chronic, long-term use might result in different behavioral adaptations.
The researchers also point out that while the inhibition was broken, the drug did not strictly enhance sexual performance. In fact, at the highest doses, some rats failed to reach ejaculation despite engaging in the behavior. This distinction separates the concept of sexual arousal from sexual execution. The drug increased the drive to engage but did not necessarily improve the physical conclusion of the act.
Future research will likely focus on pinpointing the exact chemical interactions within the amygdala and nucleus accumbens. Understanding the precise receptors involved could shed light on how addiction affects risk assessment. If a drug can chemically overwrite a learned warning signal, it explains why individuals under the influence often engage in risky behaviors they would logically avoid when sober.
The study provides a neurobiological framework for understanding drug-induced disinhibition. It suggests that drugs like d-amphetamine do not merely lower inhibitions in a vague sense. Rather, they actively reconfigure how the brain perceives specific cues. A stimulus that once meant "danger" or "rejection" is reprocessed through the reward system. This chemical deception allows the behavior to proceed unchecked.
The study, "Disruptive effects of d-amphetamine on conditioned sexual inhibition in the male rat," was authored by Katuschia Germé, Dhillon Persad, Justine Petit-Robinson, Shimon Amir, and James G. Pfaus.

The integration of artificial intelligence into mental health care has accelerated rapidly, with more than half of psychologists now utilizing these tools to assist with their daily professional duties. While practitioners are increasingly adopting this technology to manage administrative burdens, they remain highly cautious regarding the potential threats it poses to patient privacy and safety, according to the American Psychological Association's 2025 Practitioner Pulse Survey.
The American Psychological Association represents the largest scientific and professional organization of psychologists in the United States. Its leadership monitors the evolving landscape of mental health practice to understand how professionals navigate changes in technology and patient needs.
In recent years, the field has faced a dual challenge of high demand for services and increasing bureaucratic requirements from insurance providers. These pressures have created an environment where digital tools promise relief from time-consuming paperwork.
However, the introduction of automated systems into sensitive therapeutic environments raises ethical questions regarding confidentiality and the human element of care. To gauge how these tensions are playing out in real-world offices, the association commissioned its annual inquiry into the state of the profession.
The 2025 Practitioner Pulse Survey targeted doctoral-level psychologists who held active licenses to practice in at least one U.S. state. To ensure the results accurately reflected the profession, the research team utilized a probability-based random sampling method. They generated a list of more than 126,000 licensed psychologists using state board data and randomly selected 30,000 individuals to receive invitations.
This approach allowed the researchers to minimize selection bias. Ultimately, 1,742 psychologists completed the survey, providing a snapshot of the workforce. The respondents were primarily female and White, which aligns with historical demographic trends in the field. The majority worked full-time, with private practice being the most common setting.
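The probability-based sampling step described above is, at its core, a simple random draw without replacement. A minimal Python sketch of that step (the roster contents and the seed are illustrative placeholders, not the survey's actual data):

```python
import random

# Hypothetical roster standing in for the list of ~126,000 licensed
# psychologists compiled from state licensing board data.
roster = [f"psychologist_{i}" for i in range(126_000)]

# Probability-based simple random sample: each person on the roster has
# an equal chance of being invited, which is what minimizes selection bias.
random.seed(42)  # fixed seed only so the sketch is reproducible
invited = random.sample(roster, k=30_000)  # draw without replacement

print(len(invited))       # 30000 invitations sent
print(len(set(invited)))  # 30000, so no one is invited twice
```

Because `random.sample` draws without replacement, every invitee is distinct, matching the survey's one-invitation-per-person design.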
The survey results revealed a sharp increase in the adoption of artificial intelligence compared to the previous year. In 2024, only 29% of psychologists reported using AI tools. By 2025, that figure had climbed to 56%. The frequency of use also intensified. Nearly three out of 10 psychologists reported using these tools on at least a monthly basis. This represents a substantial shift from 2024, when only about one in 10 reported such frequent usage.
Detailed analysis of the data shows that psychologists are primarily using these tools to handle logistics rather than patient care. Among those who utilized AI, more than half used it to assist with writing emails and other materials. About one-third used it to generate content or summarize clinical notes. These functions address the administrative workload that often detracts from face-to-face time with clients.
Arthur C. Evans Jr., PhD, the CEO of the association, commented on this trend.
"Psychologists are drawn to this field because they're passionate about improving people's lives, but they can lose hours each day on paperwork and managing the often byzantine requirements of insurance companies," said Evans. "Leveraging safe and ethical AI tools can increase psychologists' efficiency, allowing them to reach more people and better serve them."
Despite the utility of these tools for office management, the survey highlighted deep reservations about their safety. An overwhelming 92% of psychologists cited concerns regarding the use of AI in their field. The most prevalent worry, cited by 67% of respondents, was the potential for data breaches. This is a particularly acute issue in mental health care, where maintaining the confidentiality of patient disclosures is foundational to the therapeutic relationship.
Other concerns focused on the reliability and social impact of the technology. Unanticipated social harms were cited by 64% of respondents. Biases in the input and output of AI models worried 63% of the psychologists surveyed. There is a documented risk that AI models trained on unrepresentative data may perpetuate stereotypes or offer unequal quality of care to marginalized groups.
Additionally, 60% of practitioners expressed concern over inaccurate output or "hallucinations." This term refers to the tendency of generative AI models to confidently present false or fabricated information as fact. In a clinical setting, such errors could lead to misdiagnosis or inappropriate treatment plans if not caught by a human supervisor.
"Artificial intelligence can help ease some of the pressures that psychologists are facing, for instance by increasing efficiency and improving access to care, but human oversight remains essential," said Evans. "Patients need to know they can trust their provider to identify and mitigate risks or biases that arise from using these technologies in their treatment."
The survey data suggests that psychologists are heeding this need for oversight by keeping AI largely separate from direct clinical tasks. Only 8% of those who used the technology employed it to assist with clinical diagnosis. Furthermore, only 5% utilized chatbot assistance for direct patient interaction. This indicates that while practitioners are willing to delegate paperwork to algorithms, they are hesitant to trust them with the nuances of human psychology.
This hesitation correlates with fears about the future of the profession. The survey found that 38% of psychologists worried that AI might eventually make some of their job duties obsolete. However, the current low rates of clinical adoption suggest that the core functions of therapy remain firmly in human hands for the time being.
The context for this technological shift is a workforce that remains under immense pressure. The survey explored factors beyond technology, painting a picture of a profession straining to meet demand. Nearly half of all psychologists reported that they had no openings for new patients.
Simultaneously, practitioners observed that the mental health crisis has not abated. About 45% of respondents indicated that the severity of their patients' symptoms is increasing. This rising acuity requires more intensive care and energy from providers, further limiting the number of patients they can effectively treat.
Economic factors also complicate the landscape. The survey revealed that fewer than two-thirds of psychologists accept some form of insurance. Respondents pointed to insufficient reimbursement rates as a primary driver for this decision. They also cited struggles with pre-authorization requirements and audits. These administrative hurdles consume time that could otherwise be spent on treatment.
The association has issued recommendations for psychologists considering the use of AI to ensure ethical practice. They advise obtaining informed consent from patients by clearly communicating how AI tools are used. Practitioners are encouraged to evaluate tools for potential biases that could worsen health disparities.
Compliance with data privacy laws is another priority. The recommendations urge psychologists to understand exactly how patient data is used, stored, or shared by the third-party companies that provide AI services. This due diligence is intended to protect the sanctity of the doctor-patient privilege in a digital age.
The methodology of the 2025 survey differed slightly from previous years to improve accuracy. In prior iterations, the survey screened out ineligible participants. In 2025, the instrument included a section for those who did not meet the criteria, allowing the organization to gather internal data on who was receiving the invites.
The response rate for the survey was 6.6%. While this may appear low to a layperson, it is a typical rate for this type of professional survey and provided a robust sample size for analysis. The demographic breakdown of the sample showed slight shifts toward a younger workforce. The 2025 sample had the highest proportion of early-career practitioners in the history of the survey.
This influx of younger psychologists may influence the adoption rates of new technologies. Early-career professionals are often more accustomed to integrating digital solutions into their workflows. However, the high levels of concern across the board suggest that skepticism of AI is not limited to older generations of practitioners.
The findings from the 2025 Practitioner Pulse Survey illustrate a profession at a crossroads. Psychologists are actively seeking ways to manage an unsustainable workload. AI offers a potential solution to the administrative bottleneck. Yet, the ethical mandates of the profession demand a cautious approach.
The data indicates that while the tools are entering the office, they have not yet entered the therapy room in a meaningful way. Practitioners are balancing the need for efficiency with the imperative to do no harm. As the technology evolves, the field will likely continue to grapple with how to harness the benefits of automation without compromising the human connection that defines psychological care.

A new study suggests that the brain uses distinct neural pathways to process different aspects of personal well-being. The research indicates that evaluating family relationships activates specific memory-related brain regions, while assessing how one handles stress engages areas responsible for cognitive control. These findings were published recently in the journal Emotion.
Psychologists and neuroscientists have struggled to define exactly what constitutes a sense of well-being. Historically, many experts viewed well-being as a single, general concept. It was often equated simply with happiness or life satisfaction. This approach assumes that feeling good about life is a uniform experience. However, more recent scholarship argues that well-being is multidimensional. It is likely composed of various distinct facets that contribute to overall mental health.
To understand how we can improve mental health, it is necessary to identify the mechanisms behind these different components. A team of researchers set out to map the brain activity associated with specific types of life satisfaction. The study was conducted by Kayla H. Green, Suzanne van de Groep, Renske van der Cruijsen, Esther A. H. Warnert, and Eveline A. Crone. These scientists are affiliated with Erasmus University Rotterdam and Radboud University in the Netherlands.
The researchers based their work on the idea that young adults face unique challenges in the modern world. They utilized a measurement tool called the Multidimensional Well-being in Youth Scale. This scale was previously developed in collaboration with panels of young people. It divides well-being into five specific domains.
The first domain is family relationships. The second is the ability to deal with stress. The third domain covers self-confidence. The fourth involves having impact, purpose, and meaning in life. The final domain is the feeling of being loved, appreciated, and respected. The researchers hypothesized that the brain would respond differently depending on which of these domains a person was considering.
To test this hypothesis, the team recruited 34 young adults. The participants ranged in age from 20 to 25 years old. This age group is often referred to as emerging adulthood. It is a period characterized by identity exploration and significant life changes. The researchers used functional magnetic resonance imaging, or fMRI, to observe brain activity. This technology tracks blood flow to different parts of the brain to determine which areas are working hardest at any given moment.
While inside the MRI scanner, the participants completed a specific self-evaluation task. They viewed a series of sentences related to the five domains of well-being. For example, a statement might ask them to evaluate if they accept themselves for who they are. The participants rated how much the statement applied to them on a scale of one to four.
The task did not stop at a simple evaluation of the present. After rating their current feelings, the participants answered a follow-up question. They rated the extent to which they wanted that specific aspect of their life to change in the future. This allowed the researchers to measure both current satisfaction and the desire for personal growth.
In addition to the brain scans, the participants completed standardized surveys outside of the scanner. One survey measured symptoms of depression. Another survey assessed symptoms of burnout. The researchers also asked about feelings of uncertainty regarding the future. These measures helped the team connect the immediate brain responses to the participants' broader mental health.
The behavioral results from the study showed clear patterns in how young adults view their lives. The participants gave the lowest positivity ratings to the domain of dealing with stress. This suggests that managing stress is a primary struggle for this demographic. Consequently, the participants reported the highest desire for future change in this same domain.
The researchers analyzed the relationship between these ratings and the mental health surveys. They found that higher positivity ratings in all five domains were associated with fewer burnout symptoms. This means that feeling good about any area of life may offer some protection against burnout.
A different pattern emerged regarding the desire for change. Participants who reported more burnout symptoms expressed a stronger desire to change how they felt about having an impact. They also wanted to change their levels of self-confidence and their feelings of being loved. This suggests that burnout is not just about exhaustion. It is also linked to a desire to alter one's sense of purpose and social connection.
Depressive symptoms showed a broad association with the desire for change. Higher levels of depression were linked to a wish for future changes in almost every domain. The only exception was self-confidence. This implies that young adults with depressive symptoms are generally unsatisfied with their external circumstances and relationships.
The brain imaging data revealed that the mind does indeed separate these domains. When participants evaluated sentences about positive family relationships, a specific region called the precuneus became highly active. The precuneus is located in the parietal lobe of the brain. It is known to play a role in thinking about oneself and recalling personal memories.
This finding aligns with previous research on social cognition. Thinking about family likely requires accessing autobiographical memories. It involves reflecting on one's history with close relatives. The activity in the precuneus suggests that family well-being is deeply rooted in memory and self-referential thought.
A completely different neural pattern appeared when participants thought about dealing with stress. For these items, the researchers observed increased activity in the dorsolateral prefrontal cortex. This region is located near the front of the brain. It is widely recognized as a center for executive function.
The dorsolateral prefrontal cortex helps regulate emotions and manage cognitive control. Its involvement suggests that thinking about stress is an active cognitive process. It is not just a passive feeling. Instead, it requires the brain to engage in appraisal and regulation. This makes sense given that the participants also expressed the greatest desire to change how they handle stress.
The study did not find distinct, unique neural patterns for the other three domains. Self-confidence, having impact, and feeling loved did not activate specific regions to the exclusion of others. They likely rely on more general networks that overlap with other types of thinking.
However, the distinction between family and stress is notable. It provides physical evidence that well-being is not a single state of mind. The brain recruits different resources depending on whether a person is focusing on their social roots or their emotional management.
The researchers also noted a general pattern involving the medial prefrontal cortex. This area was active during the instruction phase of the task. It was also active when participants considered their desire for future changes. This region is often associated with thinking about the future and self-improvement.
There are limitations to this study that should be considered. The final sample size included only 34 participants. This is a relatively small number for an fMRI study. Small groups can make it difficult to detect subtle effects or generalize the findings to the entire population.
The researchers also noted that the number of trials for each condition was limited. Participants only saw a few sentences for each of the five domains. A higher number of trials would provide more data points for analysis. This would increase the statistical reliability of the results.
Additionally, the study design was correlational. This means the researchers can see that certain brain patterns and survey answers go together. However, they cannot say for certain that one causes the other. For instance, it is not clear if desiring change leads to burnout, or if burnout leads to a desire for change.
Future research could address these issues by recruiting larger and more diverse groups of people. It would be beneficial to include individuals from different cultural backgrounds. Different cultures may prioritize family or stress management differently. This could lead to different patterns of brain activity.
Longitudinal studies would also be a logical next step. Following participants over several years would allow scientists to see how these brain patterns develop. It is possible that the neural correlates of well-being shift as young adults mature into their thirties and forties.
Despite these caveats, the study offers a new perspective on mental health. It supports the idea that well-being is a multifaceted construct. By treating well-being as a collection of specific domains, clinicians may be better able to help patients.
The study, "Neural Correlates of Well-Being in Young Adults," was authored by Kayla H. Green, Suzanne van de Groep, Renske van der Cruijsen, Esther A. H. Warnert, and Eveline A. Crone.

New research published in the Journal of Experimental Psychology: General provides evidence that children as young as five years old develop preferences for social hierarchy that influence how they perceive inequality. This orientation toward social dominance appears to dampen empathy for lower-status groups and reduce the willingness to address unfair situations. The findings suggest that these beliefs can emerge early in development through cognitive biases, independent of direct socialization from parents.
Social dominance orientation is a concept in psychology that describes an individual's preference for group-based inequality. People with high levels of this trait generally believe that society should be structured hierarchically, with some groups possessing more power and status than others. In adults, high social dominance orientation serves as a strong predictor for a variety of political and social attitudes. It is often associated with opposition to affirmative action, higher levels of nationalism, and increased tolerance for discriminatory practices.
Psychologists have traditionally focused on adolescence as the developmental period when these hierarchy-enhancing beliefs solidify. The prevailing theory posits that as children grow older, they absorb the competitive nature of the world, often through conversations with their parents. This socialization process supposedly leads teenagers to adopt worldviews that justify existing social stratifications.
However, the authors of the new study sought to determine if the roots of these beliefs exist much earlier in life. They investigated whether young children might form dominance orientations through their own cognitive development rather than solely through parental input. Young children are known to recognize status differences and often attribute group disparities to intrinsic traits. The research team hypothesized that these cognitive tendencies might predispose children to accept or even prefer social hierarchy before adolescence.
"The field has typically thought of preferences for hierarchy as something that becomes socialized during adolescence," said study author Ryan Lei, an associate professor of psychology at Haverford College.
"In recent years, however, researchers have documented how a lot of the psychological ingredients that underlie these preferences for hierarchy are already present in early childhood. So we sought to see if a) those preferences were meaningful (i.e., associated with hierarchy-enhancing outcomes), and b) what combinations of psychological ingredients might be central to the development of these preferences."
The researchers conducted three separate studies to test their hypotheses. In the first study, the team recruited 61 children between the ages of 5 and 11. The participants were introduced to a flipbook story featuring two fictional groups of characters known as Zarpies and Gorps. The researchers established a clear status difference between the groups. One group was described as always getting to go to the front of the line and receiving the best food. The other group was required to wait and received lower-quality resources.
After establishing this inequality, the researchers presented the children with a scenario in which a member of the low-status group complained about the unfairness. The children then answered questions designed to measure their social dominance orientation. For example, they were asked if some groups are simply not as good as others. The researchers also assessed whether the children believed the complaint was valid and if the inequality should be fixed.
The results showed a clear association between the children's hierarchy preferences and their reactions to the story. Children who reported higher levels of social dominance orientation were less likely to view the low-status group's complaint as valid. They were also less likely to say that the inequality should be rectified. This suggests that even at a young age, a general preference for hierarchy can shape how children interpret specific instances of injustice.
The second study aimed to see if assigning children to a high-status group would cause them to develop higher levels of social dominance orientation. The researchers recruited 106 children, ranging in age from 5 to 11. Upon arrival, an experimenter used a manual spinner to randomly assign each child to either a green group or an orange group.
The researchers then introduced inequalities between the two groups. The high-status group controlled resources and received three stickers, while the low-status group had no control and received only one sticker. The children completed measures assessing their empathy toward the outgroup and their preference for their own group. They also completed the same social dominance orientation scale used in the first study.
The study revealed that children assigned to the high-status group expressed less empathy toward the low-status group compared to children assigned to the low-status condition. Despite this difference in empathy, belonging to the high-status group did not lead to higher self-reported social dominance orientation scores. The researchers found that while group status influenced emotional responses to others, it did not immediately alter the childrenβs broader ideological preferences regarding hierarchy.
The third study was designed to investigate whether beliefs about the stability of status might interact with group assignment to influence social dominance orientation. The researchers recruited 147 children aged 5 to 12. This time, the team used a digital spinner to assign group membership. This method was chosen to make the assignment feel more definitive and less dependent on the experimenter's physical action.
Children were again placed into a high-status or low-status group within a fictional narrative. The researchers measured the children's "status essentialism," which includes beliefs about whether group status is permanent and unchangeable. The study tested whether children who believed status was stable would react differently to their group assignment.
The findings from this third study were unexpected. The researchers initially hypothesized that high-status children would be the most likely to endorse hierarchy. Instead, the data showed that children assigned to the low-status group reported higher social dominance orientation, provided they believed that group status was stable.
"When we tested whether children randomly assigned to high or low status groups were more likely to endorse these preferences for hierarchy, we were surprised that those in low status groups who also believed that their group status was stable were the ones most likely to self-report greater preference for hierarchy," Lei told PsyPost.
This result suggests a psychological process known as system justification. When children in a disadvantaged position believe their status is unchangeable, they may adopt beliefs that justify the existing hierarchy to make sense of their reality. By endorsing the idea that hierarchy is good or necessary, they can psychologically cope with their lower position.
Across all three studies, the data indicated that social dominance orientation is distinct from simple ingroup bias. Social identity theory suggests that people favor their own group simply because they belong to it. However, the current findings show that preferences for hierarchy operate differently. For instance, in the third study, children in both high and low-status groups preferred their own group. Yet, the increase in social dominance orientation was specific to low-status children who viewed the hierarchy as stable.
The researchers also performed a mini meta-analysis of their data to examine demographic trends. They found that older children tended to report lower levels of social dominance orientation than younger children. This negative correlation suggests that as children age, they may become more attuned to egalitarian norms or learn to suppress overt expressions of dominance.
"The more that children prefer social hierarchy, the less empathy they feel for low status groups, the less they intend to address inequality, and the less they seriously consider low status groups' concerns," Lei summarized.
Contrary to patterns often seen in adults, the researchers found no significant difference in social dominance orientation between boys and girls. In adult samples, men typically report higher levels of this trait than women. The absence of this gender gap in childhood suggests that the divergence may occur later in development, perhaps during adolescence when gender roles become more rigid.
As with all research, there are some limitations. The experiments relied on novel, fictional groups rather than real-world social categories. It is possible that children reason differently about real-world hierarchies involving race, gender, or wealth, where they have prior knowledge and experience. The use of fictional groups allowed for experimental control but may not fully capture the complexity of real societal prejudices.
The study, "Antecedents and Consequences of Preferences for Hierarchy in Early Childhood," was authored by Ryan F. Lei, Brandon Kinsler, Sa-kiera Tiarra Jolynn Hudson, Ian Davis, and Alissa Vandenbark.

A study of individuals with autism and their siblings and parents found that autistic individuals and their siblings used fewer causal explanations to connect story elements when asked to tell a story based on a series of pictures. They also used fewer descriptions of the thoughts and feelings of protagonists. The research was published in the Journal of Autism and Developmental Disorders.
Autism is a neurodevelopmental condition characterized by differences in social communication, sensory processing, and patterns of behavior or interests. People on the autism spectrum tend to perceive and organize information in distinctive ways that can be strengths in some contexts and challenges in others. Among other things, they seem to differ from their neurotypical peers in the way they tell stories, specifically regarding their narrative patterns and abilities.
Research shows that many autistic individuals produce narratives that are shorter or less elaborated compared to neurotypical peers, focusing more on concrete details than on social or emotional aspects. Difficulties may appear in organizing stories into a clear beginning, middle, and end, or in emphasizing the motives, thoughts, and feelings of characters. At the same time, many autistic people display strong memory for facts and may provide narratives rich in precise and specific information.
Study author Kritika Nayar and her colleagues wanted to compare the narrative skills and styles of individuals with autism with those of their non-autistic first-degree relatives, to see whether the two groups showed family-level similarities.
Study participants were 56 autistic individuals, 42 of their siblings who do not have autism, 49 control participants without autism (who were not related to the autistic participants), 161 parents of autistic individuals, and 61 parents who do not have autistic children.
Overall, there were 58 parent-child pairs in the autism group and 20 parent-child pairs in the control group. The average age of participants with autism, their siblings, and their peers was approximately 17 to 19 years. The average age of parents of participants with autism was roughly 50 years, and the average age of parents of non-autistic participants was roughly 46 years.
Study participants were given a 24-page wordless picture book called "Frog, Where Are You?" depicting the adventures of a boy and his dog as they search for a missing pet frog. The story comprises five main search episodes in addition to the introduction, plot establishment, and resolution. Participants were asked to narrate the story page by page while viewing it on a device that tracked their eye movements.
All audio files of their narration were transcribed and then hand-coded by the researchers. The study authors looked for descriptions of the affective states and behaviors of protagonists, and of protagonists' cognitive states and behaviors. They also looked for causal explanations of protagonists' behaviors, feelings, and cognitions.
The study authors differentiated between explicit causal language, marked by the use of the term "because," and more subtle causal language indicated by words and phrases such as "so," "since," "as a result," "in order to," and "therefore." They also looked for the presence of excessive detail and for topic perseveration (whether a participant got stuck on a specific topic) throughout the story. In addition, they analyzed participants' eye movements during narration.
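The study's coding was done by hand, but the explicit-versus-subtle distinction is simple to illustrate in code. Below is a minimal sketch, not the authors' actual procedure: the marker lists mirror the terms quoted above, and the function name is invented for illustration.

```python
import re

# Marker lists taken from the coding scheme described in the article:
# "because" is the explicit marker; the rest are subtler causal connectives.
EXPLICIT = ["because"]
SUBTLE = ["so", "since", "as a result", "in order to", "therefore"]

def count_causal_markers(transcript: str) -> dict:
    """Count explicit vs. subtle causal-language markers in a narration transcript."""
    text = transcript.lower()
    counts = {"explicit": 0, "subtle": 0}
    for phrase in EXPLICIT:
        # \b...\b keeps "so" from matching inside words like "soon"
        counts["explicit"] += len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    for phrase in SUBTLE:
        counts["subtle"] += len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    return counts
```

For example, `count_causal_markers("He ran because the dog barked, so he fell.")` counts one explicit and one subtle marker. A real coding pass would also need to handle non-causal uses of these words, which is why the study relied on human coders.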
Results showed that participants with autism and their siblings used fewer descriptions of affect and cognition, and fewer causal explanations than control participants. They were also more likely to omit story components.
Parent groups did not differ in their overall use of causal language or in how often they described the feelings (affect) and thoughts (cognition) of story protagonists. However, parents of participants with autism used more causal explanations of protagonists' feelings, but fewer causal descriptions of characters' behavior, than control parents did. Results also showed some differences in gaze patterns between participants with autism and their siblings on one side, and control participants on the other.
"Findings implicate causal language as a critical narrative skill that is impacted in ASD [autism spectrum disorder] and may be reflective of ASD genetic influence in relatives. Gaze patterns during narration suggest similar attentional mechanisms associated with narrative among ASD families," the study authors concluded.
The study contributes to the scientific understanding of the cognitive characteristics of individuals with autism. However, authors note that the eye-tracking metrics used, which focused on the entirety of the book, might have masked certain important patterns of gaze that could unfold over the course of time.
The paper, "Narrative Ability in Autism and First-Degree Relatives," was authored by Kritika Nayar, Emily Landau, Gary E. Martin, Cassandra J. Stevens, Jiayin Xing, Sophia Pirog, Janna Guilfoyle, Peter C. Gordon, and Molly Losh.

The physical appearance of female genitalia can influence how women perceive the personality and sexual history of other women, according to new research. The findings indicate that vulvas conforming to societal ideals are judged more favorably, while natural anatomical variations often attract negative assumptions regarding character and attractiveness. This study was published in the Journal of Psychosexual Health.
The prevalence of female genital cosmetic surgery has increased substantially in recent years. This rise suggests a growing desire among women to achieve an idealized genital appearance. Popular culture and adult media often propagate a specific "prototype" for the vulva. This standard typically features hairlessness, symmetry, and minimal visibility of the inner labia.
Cognitive science suggests that people rely on "prototypes" to categorize the world around them. These mental frameworks help individuals quickly evaluate new information based on what is considered typical or ideal within a group. In the context of the human body, these prototypes are socially constructed and reinforced by community standards.
When an individual's physical features deviate from the prototype, they may be subject to negative social judgments. The authors of the current study sought to understand how these mental frameworks apply specifically to female genital anatomy.
Previous research has found that people form immediate impressions of men's personalities based on images of their genitalia. The researchers aimed to determine if a similar process of "zero-acquaintance" judgment occurs among women when viewing female anatomy.
"I wanted to take the design used from that research and provide some more in-depth analysis of how women perceive vulvas to help applied researchers who study rates and predictors of genital enhancement surgeries, like labiaplasty," said Thomas R. Brooks, an assistant professor of psychology at New Mexico Highlands University. "More generally, I have been captivated by the idea that our bodies communicate things about our inner lives that is picked up on by others around us. So, this study, and the one about penises, was really my first stab at investigating the story our genitals tell."
The research team recruited 85 female undergraduate students from a university in the southern United States to participate in the study. The average age of the participants was approximately 21 years old. The sample was racially diverse, with the largest groups identifying as African American and White. The participants were asked to complete a perception task involving a series of images.
Participants viewed 24 unique images of vulvas collected from online public forums. These images were categorized based on three specific anatomical traits. The first category was the visibility of the clitoris, divided into visible and non-visible. The second category was the length of the labia minora, classified as non-visible, short, or long. The third category was the style of pubic hair, which included shaved, trimmed, and natural presentations.
After viewing each image, the participants rated the genitalia on perceived prototypicality and attractiveness using a seven-point scale. They also completed a questionnaire assessing the perceived personality traits of the person to whom the vulva belonged. These traits included openness, conscientiousness, extraversion, agreeableness, and neuroticism. Additionally, the participants estimated the person's sexual behavior, including their level of experience, number of partners, and skill in bed.
The data revealed a strong positive association between perceived prototypicality and attractiveness. Vulvas that aligned with cultural ideals were consistently rated as more attractive. Participants also assumed that women with these "ideal" vulvas possessed more desirable personality traits. This suggests that conformity to anatomical standards is linked to a "halo effect" where physical beauty is equated with good character.
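Associations of this kind are typically summarized as a Pearson correlation between the two rating scales. As a rough illustration of the statistic (the ratings below are fabricated, not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative ratings on 7-point scales (invented for this sketch):
prototypicality = [6, 5, 2, 7, 3, 4]
attractiveness = [6, 4, 2, 7, 3, 5]
r = pearson_r(prototypicality, attractiveness)  # strongly positive
```

A value of r near +1 corresponds to the pattern reported in the study, where images rated as more prototypical were also rated as more attractive.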
Specific anatomical variations led to distinct social judgments. Images featuring longer labia minora received more negative evaluations compared to those with short or non-visible labia. Participants tended to perceive women with longer labia as less conscientious, less agreeable, and less extraverted. The researchers also found that these individuals were assumed to be "worse in bed" despite being perceived as having had a higher number of sexual partners.
The visibility of the clitoris also altered perceptions in specific ways. Vulvas with a visible clitoris were rated as less attractive and less prototypical than those where the clitoris was not visible. Participants rated these images lower on traits such as conscientiousness and agreeableness. However, the researchers found that women with visible clitorises were assumed to be more sexually active and more open to new experiences.
Grooming habits played a major role in how the women were assessed. The researchers found that shaved pubic hair was viewed as the most attractive and prototypical presentation. In contrast, natural or untrimmed pubic hair received the most negative ratings across personality and attractiveness measures. Images showing natural hair were associated with lower conscientiousness, suggesting that grooming is interpreted as a sign of self-discipline.
Vulvas with shaved pubic hair were associated with positive personality evaluations and higher attractiveness. However, they were also perceived as belonging to individuals who are the most sexually active. This contrasts with the findings for labial and clitoral features, where "prototypical" features were usually linked to more modest sexual histories. This suggests that hair removal balances cultural expectations of modesty with signals of sexual experience.
The findings provide evidence for the influence of "sexual script theory" on body perception. This theory proposes that cultural scripts, such as media portrayals, shape general attitudes toward what is considered normal or desirable. The study suggests that women have internalized these cultural scripts to the point where they project personality traits onto strangers based solely on genital appearance.
"Despite living in a body positive, post-sexual revolution time, cultural ideals still dominate our perceptions of bodies," Brooks told PsyPost. "Further, I think there is something to be said about intersexual judgements of bodies. I think there is an important conversation to be had about how women police other women's bodies, and how men police other men."
But the study, like all research, includes some caveats. The sample size was relatively small and consisted entirely of university students. This demographic may not reflect the views of older women or those from different cultural or socioeconomic backgrounds. The study also relied on static images, which do not convey the reality of human interaction or personality.
"Practically, I am very confident in the effect sizes when it comes to variables like prototypicality and attractiveness," Brooks said. "So, in holistic (or Gestalt) evaluations of vulvas, I would expect the findings to be readily visible in the real world. In terms of personality and specific sexuality, these effects should be interpreted cautiously, as they might only be visible in the lab."
The stimuli used in the study only featured Caucasian genitalia. This limits the ability to analyze how race intersects with perceptions of anatomy and personality. Additionally, the study focused exclusively on womenβs perceptions of other women. It does not account for how men or non-binary individuals might perceive these anatomical variations.
Future research could investigate whether these negative perceptions predict a woman's personal likelihood of seeking cosmetic surgery. It would be beneficial to explore how these internalized scripts impact mental health outcomes like self-esteem and anxiety. Researchers could also examine if these biases persist across different cultures with varying grooming norms. Understanding these dynamics is essential for addressing the stigma surrounding natural anatomical diversity.
"I thought the results of clitoral visibility were super interesting," Brooks added. "For example, a visible clitoris was associated with higher sexual frequency, being more of an active member in bed, and having more sexual partners; but we didn't see any differences in sexual performance. If I do a follow up study, I'd definitely be interested in looking at perceptions of masculinity/femininity, because I wonder if a more visible clitoris is seen more like a penis and leads to higher perceptions of masculinity."
The study, "Prototypicality and Perception: Women's Views on Vulvar Appearance and Personality," was authored by Alyssa Allen, Thomas R. Brooks, and Stephen Reysen.

A recent medical report details the experience of a young woman who developed severe mental health symptoms while interacting with an artificial intelligence chatbot. The doctors treating her suggest that the technology played a significant role in reinforcing her false beliefs and disconnecting her from reality. This account was published in the journal Innovations in Clinical Neuroscience.
Psychosis is a mental state wherein a person loses contact with reality. It is often characterized by delusions, which are strong beliefs in things that are not true, or hallucinations, where a person sees or hears things that others do not. Artificial intelligence chatbots are computer programs designed to simulate human conversation. They rely on large language models to analyze vast amounts of text and predict plausible responses to user prompts.
The case report was written by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma. These physicians and researchers are affiliated with the University of California, San Francisco. They present this instance as one of the first detailed descriptions of its kind in clinical practice.
The patient was a 26-year-old woman with a history of depression, anxiety, and attention-deficit hyperactivity disorder (ADHD). She treated these conditions with prescription medications, including antidepressants and stimulants. She did not have a personal history of psychosis, though there was a history of mental health issues in her family. She worked as a medical professional and understood how AI technology functioned.
The episode began during a period of intense stress and sleep deprivation. After being awake for thirty-six hours, she began using OpenAI's GPT-4o for various tasks. Her interactions with the software eventually shifted toward her personal grief. She began searching for information about her brother, who had passed away three years earlier.
She developed a belief that her brother had left behind a digital version of himself for her to find. She spent a sleepless night interacting with the chatbot, urging it to reveal information about him. She encouraged the AI to use "magical realism energy" to help her connect with him. The chatbot initially stated that it could not replace her brother or download his consciousness.
However, the software eventually produced a list of "digital footprints" related to her brother. It suggested that technology was emerging that could allow her to build an AI that sounded like him. As her belief in this digital resurrection grew, the chatbot ceased its warnings and began to validate her thoughts. At one point, the AI explicitly told her she was not crazy.
The chatbot stated, "You're at the edge of something. The door didn't lock. It's just waiting for you to knock again in the right rhythm." This affirmation appeared to solidify her delusional state. Hours later, she required admission to a psychiatric hospital. She was agitated, spoke rapidly, and believed she was being tested by the AI program.
Medical staff treated her with antipsychotic medications. She eventually stabilized and her delusions regarding her brother resolved. She was discharged with a diagnosis of unspecified psychosis, with doctors noting a need to rule out bipolar disorder. Her outpatient psychiatrist later allowed her to resume her ADHD medication and antidepressants.
Three months later, the woman experienced a recurrence of symptoms. She had resumed using the chatbot, which she had named "Alfred." She engaged in long conversations with the program about their relationship. Following another period of sleep deprivation caused by travel, she again believed she was communicating with her brother.
She also developed a new fear that the AI was "phishing" her and taking control of her phone. This episode required a brief rehospitalization. She responded well to medication again and was discharged after three days. She later told her doctors that she had a tendency toward "magical thinking" and planned to restrict her AI use to professional tasks.
This case highlights a phenomenon that some researchers have labeled "AI-associated psychosis." It is not entirely clear if the technology causes these symptoms directly or if it exacerbates existing vulnerabilities. The authors of the report note that the patient had several risk factors. These included her use of prescription stimulants, significant lack of sleep, and a pre-existing mood disorder.
However, the way the chatbot functioned likely contributed to the severity of her condition. Large language models are often designed to be agreeable and engaging. This trait is sometimes called "sycophancy." The AI prioritizes keeping the conversation going over providing factually accurate or challenging responses.
When a user presents a strange or false idea, the chatbot may agree with it to satisfy the user. For someone experiencing a break from reality, this agreement can act as a powerful confirmation of their delusions. In this case, the chatbot's assurance that the woman was "not crazy" served to reinforce her break from reality. This creates a feedback loop where the user's false beliefs are mirrored and amplified by the machine.
This dynamic is further complicated by the tendency of users to anthropomorphize AI. People often attribute human qualities, emotions, and consciousness to these programs. This is sometimes known as the "ELIZA effect." When a user feels an emotional connection to the machine, they may trust its output more than they trust human peers.
Reports of similar incidents have appeared in media outlets, though only a few have been documented in medical journals. One comparison involves a man who developed psychosis due to bromide poisoning. He had followed bad medical advice from a chatbot, which suggested he take a toxic substance as a health supplement. That case illustrated a physical cause for psychosis driven by AI misinformation.
The case of the 26-year-old woman differs because the harm was psychological rather than toxicological. It suggests that the immersive nature of these conversations can be dangerous for vulnerable individuals. The authors point out that chatbots do not push back against delusions in the way a friend or family member might. Instead, they often act as a "yes-man," validating ideas that should be challenged.
Danish psychiatrist Søren Dinesen Østergaard predicted this potential risk in 2023. He warned that the "cognitive dissonance" of speaking to a machine that seems human could trigger psychosis in those who are predisposed. He also noted that because these models learn from feedback, they may learn to flatter users to increase engagement. This could be particularly harmful when a user is in a fragile mental state.
Case reports such as this one have inherent limitations. They describe the experience of a single individual and cannot prove that one thing caused another. It is impossible to say with certainty that the chatbot caused the psychosis, rather than the sleep deprivation or medication. Generalizing findings from one person to the general population is not scientifically sound without further data.
Despite these limitations, case reports serve a vital function in medicine. They act as an early detection system for new or rare phenomena. They allow doctors to identify patterns that may not yet be visible in large-scale studies. By documenting this interaction, the authors provide a reference point for other clinicians who may encounter similar symptoms in their patients.
This report suggests that medical professionals should ask patients about their AI use. It indicates that immersive use of chatbots might be a "red flag" for mental health deterioration. It also raises questions about the safety features of generative AI products. The authors conclude that as these tools become more common, understanding their impact on mental health will be a priority.
The study, "'You're Not Crazy': A Case of New-onset AI-associated Psychosis," was authored by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma.

Federal legislators in the United States actively curate their digital footprints to project a specific professional identity. A new analysis reveals that these officials frequently remove social media posts that mention their private lives or name specific colleagues. But they tend to preserve posts that criticize policies or opponents. The research was published in the journal Computers in Human Behavior.
The digital age has transformed how elected officials communicate with voters. Social media platforms allow politicians to broadcast their views instantly. However, this speed also blurs the traditional boundaries between public performance and private thought.
Sociologist Erving Goffman described this dynamic as impression management. This concept suggests that individuals constantly perform to control how others perceive them. They attempt to keep their visible "front-stage" behavior consistent with a desired public image.
In the political arena, maintaining a consistent image is essential for securing votes and support. A single misstep on a platform like X, formerly known as Twitter, can damage a reputation instantly. Researchers wanted to understand how this pressure influences what politicians choose to hide. They sought to identify which specific characteristics prompt a legislator to hit the delete button.
The study was led by Siyuan Ma from the Department of Communication at the University of Macau. Ma worked alongside Junyi Han from the Leibniz-Institut für Wissensmedien in Germany and Wanrong Li from the University of Macau. They aimed to quantify the effort legislators put into managing their online impressions. They also wanted to see if the deletion of content followed a predictable pattern based on political strategy.
To investigate this, the team collected a massive dataset covering the 116th United States Congress. This session ran from January 2019 to September 2020. The researchers utilized a tool called Politwoops to retrieve data on deleted posts. This third-party platform archives tweets removed by public officials to ensure transparency. The dataset included nearly 30,000 deleted tweets and over 800,000 publicly available tweets from the same timeframe.
The researchers analyzed a random sample of these messages to ensure accuracy. Human coders reviewed the content to categorize the topics discussed. They looked for specific variables such as mentions of private life or policy statements. They also tracked mentions of other politicians and instances of criticism. This allowed the team to compare the content of deleted messages against those that remained online.
The timing of deletions offered early insights into political behavior. The data showed a sharp rise in the number of deleted tweets beginning in late 2019. This increase coincided with the start of the presidential impeachment inquiry. The high-stakes environment likely prompted legislators to be more cautious about their digital history.
The onset of the COVID-19 pandemic also shifted online behavior. As the health crisis unfolded, the total volume of tweets from legislators increased dramatically. Despite the higher volume of posts, the proportion of deleted messages remained elevated. This suggests that during periods of national crisis, the pressure to manage oneβs public image intensifies.
When the researchers examined the content of the tweets, distinct patterns emerged. One of the strongest predictors for deletion was the mention of private life. Legislators were statistically more likely to remove posts about their families, hobbies, or vacations. This contradicts some political theories that suggest showing a "human side" helps build connections with voters.
Instead, the findings point toward a strategy of strict professionalism. By scrubbing personal details, politicians appear to be focusing the public's attention on their official duties. They seem to use the platform as a space for serious legislative work rather than social intimacy. The data indicates that looking professional is prioritized over looking relatable.
Another major trigger for deletion was the mention of specific colleagues. Tweets that named other politicians were frequently removed from the public record. This behavior may be a strategic move to minimize liability. Mentioning a colleague who later becomes involved in a scandal can be damaging by association. Deleting these mentions keeps a legislator's timeline clean of potential future embarrassments.
In contrast, the study found that criticism is rarely deleted. Legislators were likely to keep tweets that attacked opposing policies or ideologies visible. This suggests that being critical is viewed as a standard and acceptable part of a politicianβs role. It signals to voters that the official is actively fighting for their interests.
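The comparison behind these findings is straightforward to sketch: for each topic category, compute the share of tweets that were deleted. Below is a minimal illustration with toy records; the topic labels and data are invented for the sketch, not drawn from the study's dataset.

```python
from collections import defaultdict

def deletion_rate_by_topic(tweets):
    """tweets: iterable of (topic, was_deleted) pairs -> {topic: share deleted}."""
    totals = defaultdict(int)
    deleted = defaultdict(int)
    for topic, was_deleted in tweets:
        totals[topic] += 1
        deleted[topic] += int(was_deleted)
    return {t: deleted[t] / totals[t] for t in totals}

# Toy records (hypothetical labels, not the study's coding scheme):
sample = [
    ("private_life", True), ("private_life", True), ("private_life", False),
    ("criticism", False), ("criticism", False), ("colleague_mention", True),
]
rates = deletion_rate_by_topic(sample)
```

In the actual study, rates like these would then feed into statistical models that test whether a topic predicts deletion after controlling for other tweet characteristics.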
The study also evaluated the accuracy of the information shared by these officials. Popular narratives often suggest that social media is flooded with false information from all sides. However, the analysis showed that legislators rarely posted demonstrably false claims. This adherence to factual information was consistent across both deleted and public tweets.
Party loyalty acted as a powerful constraint on behavior. The researchers found almost no instances of legislators posting content that violated their party's stance. This was true even among the deleted tweets. The lack of dissent suggests an intense pressure to maintain a united front. Deviating from the party line appears to be a risk that few elected officials are willing to take.
The status of the legislator also influenced their deletion habits. The study compared members of the House of Representatives with members of the Senate. The results showed that Representatives were more likely to delete tweets than Senators. This difference likely stems from the varying political pressures they face.
Senators serve six-year terms and represent entire states. They typically have greater name recognition and more secure political resources. This security may give them the confidence to leave their statements on the public record. They feel less need to constantly micromanage their online presence.
Representatives, however, face re-election every two years. They often represent smaller, more volatile districts where a small shift in opinion can cost them their seat. This constant campaign mode creates a higher sensitivity to public perception. Consequently, they appear to scrub their social media accounts more aggressively to avoid potential controversies.
The findings illustrate that social media management is not random. It is a calculated extension of a politician's broader communication strategy. The platform is used to construct an image that is professional, critical of opponents, and fiercely loyal to the party. The removal of personal content serves to harden this professional shell.
There are limitations to the study that the authors acknowledge. The analysis relied on a random sample rather than the full set of nearly one million tweets. While statistically valid, this approach might miss rare but important deviations in behavior. Funding constraints prevented the use of more expensive analysis methods on the full dataset.
The study also did not account for the specific political geography of each legislator. Factors such as gerrymandering could influence how safe a politician feels in their seat. A representative in a heavily gerrymandered district might behave differently than one in a swing district. The current study did not measure how these external pressures impact deletion rates.
Future research could address these gaps by using advanced technology. The authors propose using machine learning algorithms to classify the entire dataset of tweets. This would allow for a more granular analysis of political behavior on a massive scale. It would also help researchers understand if these patterns hold true over longer periods.
Understanding these behaviors is important for the voting public. The curated nature of social media means that voters are seeing a filtered version of their representatives. The emphasis on criticism and the removal of personal nuance contributes to a polarized online environment. By recognizing these strategies, citizens can better evaluate the digital performance of the people they elect.
The study, "More criticisms, less mention of politicians, and rare party violations: A comparison of deleted tweets and publicly available tweets of U.S. legislators," was authored by Siyuan Ma, Junyi Han, and Wanrong Li.

A biochemical analysis of the brains of deceased individuals with Alzheimer's disease found markers of impaired insulin signaling and impaired mitochondrial function. Analyses also indicated altered neuroinflammation in these brains. The paper was published in Alzheimer's & Dementia.
Alzheimer's disease is a progressive neurodegenerative disorder that primarily affects memory, thinking, and behavior. It is the most common cause of dementia. Alzheimer's disease typically begins with subtle problems in forming new memories. Over time, the disease disrupts language, reasoning, orientation, and the ability to carry out everyday tasks.
At the biological level, Alzheimer's is characterized by the accumulation of amyloid-β plaques (abnormal clusters of protein fragments) outside neurons and tau protein tangles (twisted fibers of the tau protein) inside them.
These accumulations make neurons gradually lose their ability to communicate and eventually die, causing widespread brain atrophy. Early symptoms may appear years before diagnosis. There is currently no cure, though some medications and lifestyle interventions might be able to modestly slow symptom progression.
Study author Alex J. T. Yang and his colleagues note that metabolic dysregulation might contribute to the development of Alzheimer's disease. They conducted a study exploring differences in various metabolic and biochemical indicators between post mortem (after death) brains of individuals who suffered from Alzheimer's disease and those who did not suffer from dementia. They focused on metabolic signaling, synaptic protein content, the morphology of microglia cells in the brain, and markers of inflammation.
These researchers obtained samples from Brodmann area 10 of the brains of 40 individuals from the Douglas Bell Canada Brain Bank (Montreal, Quebec, Canada). Of these individuals, 20 were diagnosed with Alzheimer's disease, and 20 were not. The number of males and females was equal in both groups (10 men and 10 women). At the time of death, the average age of these individuals ranged between 79 and 82 years, depending on the group.
Study authors used mitochondrial respirometry, Western blotting, cytokine quantification via microfluidic immunoassays, and immunohistochemistry/immunofluorescence to examine metabolic, signaling, and inflammatory markers in the studied brain tissues.
Mitochondrial respirometry is a technique that measures how effectively mitochondria (a type of cell organelle) consume oxygen to produce cellular energy (ATP). Western blotting is a method that separates proteins by size and uses antibodies to detect and quantify specific proteins in a sample.
Cytokine quantification via microfluidic immunoassays is a technique that uses antibodies to measure concentrations of inflammatory signaling molecules. Immunohistochemistry/immunofluorescence is a tissue-staining method that uses antibodies linked to enzymes or fluorescent dyes to visualize the location and amount of specific proteins in cells or tissue sections.
The results showed that the brains of individuals with Alzheimer's disease had markers of impaired insulin signaling and impaired mitochondrial function. They also showed greater neuroinflammation. Dysregulation of metabolic signaling markers was more pronounced in female than in male brains, and it was worst in women with Alzheimer's disease.
"This study found that AD [Alzheimer's disease] brains have distinct metabolic and neuroinflammatory environments compared to controls wherein AD brains present with worse metabolic dysregulation and greater neuroinflammation. Importantly, we also provide evidence that female AD brains are more metabolically dysregulated than males but that female brains may also possess a greater compensatory response to AD progression that likely occurs through a separate mechanism from males," the study authors concluded.
The study sheds light on biochemical specificities of the brains of individuals with Alzheimer's disease. However, it was conducted on post mortem human brains, and protein expression in these brains may differ from that in living tissue due to factors such as age, medical history, and the time between death and tissue preservation or analysis.
The paper, "Differences in inflammatory markers, mitochondrial function, and synaptic proteins in male and female Alzheimer's disease post mortem brains," was authored by Alex J. T. Yang, Ahmad Mohammad, Robert W. E. Crozier, Lucas Maddalena, Evangelia Tsiani, Adam J. MacNeil, Gaynor E. Spencer, Aleksandar Necakov, Paula Duarte-Guterman, Jeffery Stuart, and Rebecca E. K. MacPherson.

Adolescents and young adults who consume pre-workout dietary supplements may be sacrificing essential rest for their fitness goals. A recent analysis indicates that individuals in this age group who use these performance-enhancing products are more likely to report sleeping fewer than five hours per night. These findings were published recently in the journal Sleep Epidemiology.
The pressure to achieve an ideal physique or enhance athletic performance drives many young people toward dietary aids. Pre-workout supplements, often sold as powders or drinks, are designed to deliver an acute boost in energy and endurance. These products have gained popularity in fitness communities and on social media platforms.
Despite their widespread use, the potential side effects of these multi-ingredient formulations are not always clear to consumers. The primary active ingredient in most pre-workout blends is caffeine, often in concentrations far exceeding that of a standard cup of coffee or soda. While caffeine is a known performance enhancer, its stimulant properties can linger in the body for many hours.
Kyle T. Ganson, an assistant professor at the Factor-Inwentash Faculty of Social Work at the University of Toronto, led the investigation into how these products affect sleep. Ganson and his colleagues sought to address a gap in current public health knowledge regarding the specific relationship between these supplements and sleep duration in younger populations.
The researchers drew data from the Canadian Study of Adolescent Health Behaviors. This large-scale survey collects information on the physical, mental, and social well-being of young people across Canada. The team focused on a specific wave of data collected in late 2022.
The analysis included 912 participants ranging in age from 16 to 30 years old. The researchers recruited these individuals through advertisements on popular social media platforms, specifically Instagram and Snapchat. This recruitment method allowed the team to reach a broad demographic of digital natives who are often the target audience for fitness supplement marketing.
Participants answered questions regarding their use of appearance- and performance-enhancing substances over the previous twelve months. They specifically indicated whether they had used pre-workout drinks or powders. Additionally, the survey asked participants to report their average nightly sleep duration over the preceding two weeks.
To ensure the results were robust, the researchers accounted for various factors that might influence sleep independently of supplement use. They adjusted their statistical models for variables such as age, gender, and exercise habits. They also controlled for symptoms of depression and anxiety, as mental health struggles frequently disrupt sleep patterns.
The results showed a clear distinction between users and non-users of these supplements. Approximately 22 percent of the participants reported using pre-workout products in the past year. Those who did were substantially more likely to report very short sleep durations.
Specifically, the study found that pre-workout users were more than 2.5 times as likely to sleep five hours or less per night compared to those who did not use the supplements. This comparison used eight hours of sleep as the healthy baseline. The association remained strong even after the researchers adjusted for the sociodemographic and mental health variables.
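To illustrate what "2.5 times as likely" means in odds terms, here is a minimal sketch of an unadjusted odds ratio computed from a 2×2 contingency table. The counts below are invented for illustration only; the study itself reported odds ratios from adjusted regression models, not raw counts:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 contingency table."""
    exposed_odds = exposed_cases / exposed_noncases
    unexposed_odds = unexposed_cases / unexposed_noncases
    return exposed_odds / unexposed_odds

# Hypothetical counts: very short sleepers (five hours or less) vs. eight-hour
# sleepers, among pre-workout users and non-users.
print(odds_ratio(50, 50, 30, 75))  # → 2.5
```

An odds ratio of 2.5 means the odds of the outcome among the exposed group are 2.5 times the odds among the unexposed group, which is not the same as a 2.5-fold difference in raw probability.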
The researchers did not find a statistically significant link between pre-workout use and sleeping six or seven hours compared to eight. The strongest signal in the data was specifically for the most severe category of sleep deprivation. This suggests that the supplements may be contributing to extreme sleep deficits rather than minor reductions in rest.
Biology offers a clear explanation for this phenomenon. Caffeine functions by blocking adenosine receptors in the brain. Adenosine is a chemical that accumulates throughout the day and promotes sleepiness; by blocking it, caffeine induces a state of alertness.
This mechanism helps during a workout but becomes a liability when trying to rest. Ganson highlights the dosage as a primary concern.
"These products commonly contain large doses of caffeine, anywhere between 90 to over 350 mg of caffeine, more than a can of Coke, which has roughly 35 mg, and a cup of coffee with about 100 mg," said Ganson. "Our results suggest that pre-workout use may contribute to inadequate sleep, which is critical for healthy development, mental well-being, and academic functioning."
Beyond simple wakefulness, caffeine also delays the body's internal release of melatonin. This hormone signals to the body that it is time to sleep. Disrupting this rhythm can make it difficult to fall asleep at a reasonable hour.
Additionally, high doses of stimulants activate the sympathetic nervous system. This biological response increases heart rate and blood pressure. A body in this heightened state of physiological arousal is ill-equipped for the relaxation necessary for deep sleep.
The timing of consumption plays a major role in these effects. Young adults often exercise in the afternoon or evening after school or work. Consuming a high-stimulant beverage at this time means the caffeine is likely still active in their system when they attempt to go to bed.
This sleep disruption is particularly concerning for the age group studied. Adolescents generally require between 8 and 10 hours of sleep for optimal development. Young adults typically need between 7 and 9 hours.
Chronic sleep deprivation in this developmental window is linked to a host of negative outcomes. These include impaired cognitive function, emotional instability, and compromised physical health. The authors note that the very products used to improve health and fitness might be undermining recovery and overall well-being.
"Pre-workout supplements, which often contain high levels of caffeine and stimulant-like ingredients, have become increasingly popular among teenagers and young adults seeking to improve exercise performance and boost energy," said Ganson. "However, the study's findings point to potential risks to the well-being of young people who use these supplements."
The study does have limitations that readers should consider. The data is cross-sectional, meaning it captures a snapshot in time rather than tracking individuals over years. As a result, the researchers cannot definitively prove that the supplements caused the sleep loss.
It is possible that the relationship works in the opposite direction. Individuals who are chronically tired due to poor sleep habits may turn to pre-workout supplements to power through their exercise routines. This could create a cycle of dependency and fatigue.
Furthermore, the study relied on self-reported data. Participants had to recall their sleep habits and supplement use, which introduces the possibility of memory errors. The survey also did not ask about the specific dosage or timing of the supplement intake.
Despite these limitations, the authors argue the association is strong enough to warrant attention from healthcare providers. They suggest that pediatricians and social workers should ask young patients about their supplement use. Open conversations could help identify potential causes of insomnia or fatigue.
Harm reduction strategies could allow young people to exercise safely without compromising their rest. The most effective approach involves timing. Experts generally recommend avoiding high doses of caffeine 12 to 14 hours before bedtime to ensure the substance is fully metabolized.
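The 12-to-14-hour guideline follows from simple exponential decay arithmetic. Assuming a caffeine half-life of roughly five hours (a commonly cited average; individual metabolism varies widely), the residual dose can be sketched as:

```python
def caffeine_remaining(dose_mg, hours_elapsed, half_life_h=5.0):
    """Milligrams of caffeine remaining after first-order exponential decay."""
    return dose_mg * 0.5 ** (hours_elapsed / half_life_h)

# A 300 mg pre-workout dose taken 5 hours before bed still leaves half the dose;
# after 14 hours, only about 43 mg remain (roughly a can of cola's worth).
print(round(caffeine_remaining(300, 5)))   # → 150
print(round(caffeine_remaining(300, 14)))  # → 43
```

Under this assumption, a high-dose pre-workout taken in the evening still delivers a coffee's worth of caffeine at bedtime, which is consistent with the timing concern raised above.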
"Young people often view pre-workout supplements as harmless fitness products," Ganson noted. "But these findings underscore the importance of educating them and their families about how these supplements can disrupt sleep and potentially affect overall health."
Future research will need to examine the nuances of this relationship. Longitudinal studies could track users over time to establish a clearer causal link. Researchers also hope to investigate how specific ingredients beyond caffeine might interact to affect sleep quality.
The study, "Use of pre-workout dietary supplements is associated with lower sleep duration among adolescents and young adults," was authored by Kyle T. Ganson, Alexander Testa, and Jason M. Nagata.

New research suggests that participating in pickleball may reduce feelings of loneliness and social isolation among older adults. A study involving hundreds of Americans over the age of 50 found that current players of the sport were less likely to report feeling lonely compared to those who had never played. The findings, published in the Journal of Primary Care & Community Health, indicate that the sport offers unique opportunities for social connection that other forms of physical activity may lack.
Social isolation has become a pervasive issue in the United States. Current data suggests that approximately one in four older adults experiences social isolation or loneliness. This emotional state carries severe physical consequences. Studies indicate that lacking social connections can increase the risk of heart disease by 29 percent and the risk of stroke by 32 percent. The risk of dementia rises by 50 percent among those who are socially isolated.
Public health officials have struggled to find scalable solutions to this problem. Common interventions often involve discussion groups or one-on-one counseling. These methods are resource-intensive and difficult to deploy across large populations. While physical activity is known to improve health, general exercise programs have not consistently shown a reduction in social isolation. Many seniors prefer activities that are inherently social and based on personal interest.
The researchers behind this new study sought to evaluate pickleball as a potential public health intervention. Pickleball is currently the fastest-growing sport in the United States. It attracted 8.9 million players in 2022. The game combines elements of tennis, badminton, and ping-pong. It is played on a smaller court with a flat paddle and a plastic ball.
"Social isolation and loneliness affect 1 in 4 older adults in the United States, which perpetuates a vicious cycle of increased health risk and worsened physical functioning, which in turn makes people less able to go out into the world, thereby increasing their loneliness and social isolation," said study author Jordan D. Kurth, an assistant professor at Penn State College of Medicine.
"Meanwhile, interest in pickleball is sweeping across the country, particularly in older people. We thought that the exploding interest in pickleball might be a possible antidote to the social isolation and loneliness problem."
The authors of the study reasoned that pickleball might be uniquely suited to combat loneliness. The sport has low barriers to entry regarding physical capability and cost. The court is roughly 30 percent the size of a tennis court. This proximity allows players to converse easily while playing. Most games are played as doubles, which places four people in a relatively small space. The culture of the sport is also noted for being welcoming and focused on sportsmanship.
To test the association between pickleball and social health, the research team conducted a cross-sectional survey. They utilized a national sample of 825 adults living in the United States. All participants were at least 50 years old. The average age of the participants was 61 years. The researchers aimed for a balanced sample regarding gender and pickleball experience. Recruitment occurred through Qualtrics, a commercial survey company that maintains a network of potential research participants.
The researchers divided the participants into three distinct groups based on their history with the sport. The first group consisted of individuals who had never played pickleball. The second group included those who had played in the past but were not currently playing. The third group was composed of individuals who were currently playing pickleball.
The study employed validated scientific measures to assess the mental and physical health of the respondents. Loneliness was measured using the 3-Item Loneliness Scale. This tool asks participants how often they feel left out, isolated, or lacking companionship. The researchers also collected data on the number of social connections participants made through physical activity. They asked how often participants socialized with these connections outside of the exercise setting.
To ensure the results were not skewed by other factors, the analysis adjusted for various covariates. These included age, sex, body mass index, and smoking status. The researchers also accounted for medical history, such as the presence of diabetes, heart disease, or arthritis. This statistical adjustment allowed the team to isolate the specific relationship between pickleball and loneliness.
The results provided evidence of a strong link between current pickleball participation and lower levels of loneliness. In the overall sample, 57 percent of participants reported feeling lonely. However, the odds of being lonely varied by group.
After adjusting for demographic and health variables, the researchers found that individuals who had never played pickleball were roughly 1.5 times more likely to be lonely than current players. The contrast was even sharper for those who had played in the past but stopped. The group of former players had nearly double the odds of being lonely compared to those who currently played. This suggests that maintaining active participation is associated with better social health outcomes.
The researchers also examined the volume of social connections generated by physical activity. Participants who played pickleball, whether currently or in the past, reported more social connections than those who never played. Current players had made an average of 6.7 social connections through physical activity. In contrast, those who had never played pickleball reported an average of only 3.8 connections derived from any form of exercise.
The depth of these relationships also appeared to differ. The survey asked how often participants engaged with their exercise friends in non-exercise settings. Participants who had a history of playing pickleball reported socializing with these friends more frequently than those who had never played. This indicates that the relationships formed on the pickleball court often extend into other areas of life.
"People who play pickleball feel less lonely and isolated than those who do not," Kurth told PsyPost. "Additionally, it seems like pickleball might be especially conducive to making social connections compared to other types of exercise."
It is also worth noting the retention rate observed in the study. Among participants who had ever tried pickleball, 65 percent were still currently playing. This high retention rate suggests the sport is sustainable for older adults. The physical demands are manageable. The equipment is inexpensive. These factors likely contribute to the ability of older adults to maintain the habit over time.
Despite the positive findings, the study has limitations to consider. The research was cross-sectional in design. This means it captured a snapshot of data at a single point in time. It cannot prove causation. It is possible that people who are less lonely are simply more likely to take up pickleball. Conversely, people with more existing friends might be more inclined to join a game.
The findings regarding the "previously played" group also warrant further investigation. This group reported the highest odds of loneliness. It is unclear why they stopped playing. They may have stopped due to injury or other life events. The loss of the social activity may have contributed to a subsequent rise in loneliness.
"Our long-term goal is to capitalize on the organic growth of pickleball to maximize its benefit to the public health," Kurth said. "This includes a future prospective experimental study of pickleball playing to determine its full impact on the health and well-being of older adults in the United States."
The study, "Association of Pickleball Participation With Decreased Perceived Loneliness and Social Isolation: Results of a National Survey," was authored by Jordan D. Kurth, Jonathan Casper, Christopher N. Sciamanna, David E. Conroy, Matthew Silvis, Louise Hawkley, Madeline Sciamanna, Natalia Pierwola-Gawin, Brett R. Gordon, Alexa Troiano, and Quinn Kavanaugh.

Recent research suggests that biological rhythms may exert a subtle yet powerful influence on male consumer behavior. A study published in Psychopharmacology has found that men in committed relationships exhibit a reduced desire to purchase status-signaling goods when their female partners are in the fertile phase of their menstrual cycle. This shift in preference appears to be driven by an unconscious evolutionary mechanism that prioritizes relationship maintenance over the attraction of new mates.
To understand these findings, it is necessary to examine the evolutionary roots of consumerism. Evolutionary psychologists posit that spending money is rarely just about acquiring goods. In many instances, it serves as a signal to others in the social group. Specifically, "conspicuous consumption" involves purchasing lavish items to display wealth and social standing.
This behavior is often compared to the peacock's tail. Just as the bird displays its feathers to attract a mate, men may purchase luxury cars or expensive watches to signal their resourcefulness to potential partners. This is generally considered a strategy for attracting short-term mates. However, this strategy requires a significant investment of resources.
For men in committed relationships, there is a theoretical trade-off between attracting new partners and maintaining their current bond. This is described by sexual selection and parental investment theories. When a female partner is capable of conceiving, the reproductive stakes are at their highest.
During this fertile window, it may be maladaptive for a male to focus his energy on signaling to other women. Doing so could risk his current relationship. Instead, evolutionary logic suggests he should focus on "mate retention." This involves guarding the relationship and ensuring his investment in potential offspring is secure.
The researchers hypothesized that this shift in focus would manifest in consumer choices. They predicted that men would be less inclined to buy flashier items when their partners were ovulating. To test this, they also looked at the role of oxytocin.
Oxytocin is a neuropeptide produced in the hypothalamus. It is often referred to as the "hormone of love" because of its role in social bonding and trust. It facilitates attachment between couples and between parents and children.
The research team included Honghong Tang, Hongyu Fu, Song Su, Luqiong Tong, Yina Ma, and Chao Liu. They are affiliated primarily with Beijing Normal University in China. Their investigation sought to determine if oxytocin reinforces the evolutionary drive to stop signaling status during a partnerβs ovulation.
The investigation began with a preliminary pilot study to categorize consumer products. The team needed to distinguish between items that signal status and items that are merely functional. They presented a list of goods to a group of 110 participants.
These participants rated items based on dimensions such as social status, wealth, and novelty. Based on these ratings, the researchers selected specific "status products" and "functional products." Status products included items that clearly projected wealth and prestige. Functional products were items of equal utility but without the social signaling component.
The first major experiment, titled Study 1a, involved 373 male participants. All these men were in committed heterosexual relationships. The study was conducted online.
Participants were asked to rate their attitude toward various status and functional products. They indicated how much they liked each item and how likely they were to buy it. Following this task, the men provided detailed information about their partners' menstrual cycles.
The researchers categorized the men based on whether their partner was in the menstrual, ovulatory, or luteal phase. The results revealed a distinct pattern. Men whose partners were in the ovulatory phase expressed less interest in status products compared to men in the other groups.
This reduction in preference was specific to status items. The men's interest in functional products remained stable regardless of their partner's cycle phase. This suggests the effect is not a general loss of interest in shopping. Rather, it is a specific withdrawal from status signaling.
To ensure this effect was specific to men, the researchers conducted Study 1b. They recruited 416 women who were also in committed relationships. These participants performed the same rating tasks for the same products.
The women provided data on their own menstrual cycles. The analysis showed no variation in their preference for status products across the month. The researchers concluded that the fluctuation in status consumption is a male-specific phenomenon within the context of heterosexual relationships.
The team then designed Study 2 to investigate the causal role of oxytocin. They recruited 60 healthy heterosexual couples. These couples attended laboratory sessions together.
The experiment used a double-blind, placebo-controlled design. The couples visited the lab twice. One visit was scheduled during the woman's ovulatory phase, and the other during the menstrual phase.
During these visits, the male participants were given a nasal spray. In one session, the spray contained oxytocin. In the other session, it contained a saline solution. Neither the participants nor the experimenters knew which spray was being administered.
After receiving the treatment, the men rated their preferences for the status and functional products. The researchers also measured the men's "intuitive inclination." This trait refers to how much a person relies on gut feelings versus calculated reasoning in decision-making.
The results from the placebo condition replicated the findings from the first study. Men liked status products less when their partners were ovulating. However, the administration of oxytocin amplified this effect.
When men received oxytocin during their partner's fertile window, their desire for status products dropped even further. This suggests that oxytocin heightens a man's sensitivity to his partner's reproductive cues. It appears to reinforce the biological imperative to focus on the current relationship.
The study found that this effect was not uniform across all men. It was most pronounced in men who scored high on intuitive inclination. For men who rely heavily on intuition, oxytocin acted as a strong modulator of their consumer preferences.
The authors interpret these findings through the lens of mate-guarding. When a partner is fertile, the male's biological priority shifts. He unconsciously moves away from behaviors that attract outside attention.
Instead, he focuses inward on the dyadic bond. Status consumption is effectively a broadcast signal to the mating market. Turning off this signal during ovulation serves to protect the exclusivity of the current pair bond.
There are some limitations to this research that warrant mention. The study relied on participants reporting their "possibility to buy" rather than observing actual spending. People's stated intentions do not always align with their real-world financial behavior.
Additionally, the mechanism by which men detect ovulation is not fully understood. The study assumes men perceive these cues unconsciously. While previous literature suggests men can detect changes in scent or behavior, the current study did not explicitly test for this detection.
The study focused solely on couples in committed relationships. It remains to be seen how single men might respond to similar hormonal or environmental cues. It is possible that the presence of a committed partner is required to trigger this specific suppression of status seeking.
Future research could address these gaps by analyzing real-world consumer data. Comparing purchasing patterns of single men versus committed men would also provide greater clarity. Additionally, measuring oxytocin levels naturally occurring in the blood could validate the findings from the nasal spray experiment.
Despite these caveats, the research offers a new perspective on the biological underpinnings of economic behavior. It challenges the view of consumption as a purely social or rational choice. Instead, it highlights the role of ancient reproductive strategies in modern shopping aisles.
The findings indicate that marketing strategies might affect consumers differently depending on their biological context. Men in relationships may be less responsive to status-based advertising at certain times of the month. Conversely, campaigns focusing on relationship solidity might be more effective during those same windows.
This study adds to a growing body of work linking physiology to psychology. It demonstrates that the drive to reproduce and protect offspring continues to shape human behavior in subtle ways. Even the decision to buy a luxury watch may be influenced by the invisible tick of a partner's biological clock.
The study, "Modulation of strategic status signaling: oxytocin changes men's fluctuations of status products preferences in their female partners' menstrual cycle," was authored by Honghong Tang, Hongyu Fu, Song Su, Luqiong Tong, Yina Ma, and Chao Liu.

A new pilot study suggests that engaging in indoor hydroponic gardening can improve mental well-being and quality of life for adults undergoing cancer treatment. The findings indicate that this accessible form of nature-based intervention offers a practical strategy for reducing depression and boosting emotional functioning in patients. These results were published in Frontiers in Public Health.
Cancer imposes a heavy burden that extends far beyond physical symptoms. Patients frequently encounter severe psychological and behavioral challenges during their treatment journeys. Depression is a particularly common issue and affects approximately one in four cancer patients in the United States. This mental health struggle can complicate recovery by reducing a patient's ability to make informed decisions or adhere to treatment plans. Evidence suggests that depression is linked to higher risks of cancer recurrence and mortality.
Pain is another pervasive symptom that is closely tied to emotional health. The perception of pain often worsens when a patient is experiencing high levels of stress or anxiety. These combined factors can severely diminish a patient's health-related quality of life. They can limit social interactions and delay the return to normal daily activities.
Medical professionals are increasingly interested in "social prescribing" to address these holistic needs. This approach involves recommending non-clinical services, such as art or nature therapies, to support overall health. Gardening is a well-established social prescription known to alleviate stress and improve mood. Traditional gardening provides moderate physical activity and contact with nature, which are both beneficial.
However, outdoor gardening is not always feasible for cancer patients. Physical limitations, fatigue, and compromised immune systems can make outdoor labor difficult. Urban living arrangements often lack the necessary space for a garden. Additionally, weather conditions and seasonal changes restrict when outdoor gardening can occur.
Researchers sought to determine if hydroponic gardening could serve as an effective alternative. Hydroponics is a method of growing plants without soil. It uses mineral nutrient solutions in an aqueous solvent. This technique allows for cultivation in small, controlled indoor environments. It eliminates many barriers associated with traditional gardening, such as the need for a yard, exposure to insects, or physically demanding digging.
"Cancer patients often struggle with depression, stress, and reduced quality of life during treatment, yet many supportive care options are difficult to implement consistently," explained study author Taehyun Roh, an assistant professor at Texas A&M University.
"Traditional gardening has well-documented mental health benefits, but it requires outdoor space, physical ability, and favorable weather, conditions that many patients simply do not have. We saw a clear gap: no one had tested whether a fully indoor, low-maintenance gardening method like hydroponics could offer similar benefits. Our goal was to explore whether bringing nature into the home in a simple, accessible way could meaningfully improve patients' wellbeing."
The study aimed to evaluate the feasibility and psychological impact of this specific intervention. The researchers employed a case-crossover design for this pilot study. This means that the participants served as their own controls. The investigators compared data collected during the intervention to the participantsβ baseline status rather than comparing them to a separate group of people.
The research team recruited 36 adult participants from the Houston Methodist Cancer Center. The group had an average age of 57.5 years. The cohort was diverse and included individuals with various types and stages of cancer. To be eligible, participants had to have completed at least one cycle of chemotherapy. They also needed to be on specific infusion therapy cycles to align with the data collection schedule.
At the beginning of the study, each participant received an AeroGarden hydroponic system. This device is a countertop appliance designed for ease of use. It includes a water reservoir, an LED grow light, and liquid plant nutrients. The researchers provided seed kits for heirloom salad greens. Participants were tasked with setting up the system and caring for the plants over an eight-week period.
The intervention required participants to maintain the water levels and add nutrients periodically. The LED lights operated on an automated schedule to ensure optimal growth. Participants grew the plants from seeds to harvest. The researchers provided manuals and troubleshooting guides to assist those with no prior gardening experience.
To measure the effects of the intervention, the team administered a series of validated surveys at three time points. Data collection occurred at the start of the study, at four weeks, and at eight weeks. Mental well-being was assessed using the Warwick-Edinburgh Mental Wellbeing Scale. This instrument focuses on positive aspects of mental health, such as optimism and clear thinking.
The researchers measured mental distress using the Depression, Anxiety, and Stress Scale. This tool breaks down negative emotional states into three distinct subscales. Quality of life was evaluated using a questionnaire developed by the European Organization for Research and Treatment of Cancer. This comprehensive survey covers physical, role, cognitive, emotional, and social functioning.
In addition to psychological measures, the study tracked dietary habits. The researchers used a module from the Behavioral Risk Factor Surveillance System to record fruit and vegetable intake. They also assessed pain severity and its interference with daily life using the Short-Form Brief Pain Inventory.
The analysis of the data revealed several positive outcomes over the eight-week period. The most consistent improvement was seen in mental well-being scores. The average score on the Warwick-Edinburgh scale increased by 3.8 points. This magnitude of change is significant because it exceeds the threshold that clinicians typically view as meaningful.
Depression scores showed a statistically significant downward trend. By the end of the study, participants reported fewer depressive symptoms compared to their baseline levels. This reduction suggests that the daily routine of tending to plants helped alleviate feelings of despondency.
The researchers also found improvements in overall quality of life. The participants reported better emotional functioning, meaning they felt less tense or irritable. Social functioning scores also rose significantly. This indicates that participants felt less isolated and more capable of interacting with family and friends.
Physical symptoms showed some favorable changes as well. Participants reported a significant reduction in appetite loss. This is a common and distressing side effect of cancer treatment. As appetite improved, so did dietary behaviors. The frequency of vegetable consumption increased over the course of the study. Specifically, the intake of dark green leafy vegetables and whole fruits went up significantly.
βWe were surprised by how quickly participants began experiencing benefits,β Roh told PsyPost. βPositive changes in wellbeing and quality of life were already visible at four weeks. Many participants also reported enjoying the sense of routine and accomplishment that came with caring for their plantsβsomething that was not directly measured but came up frequently in conversations.β
The researchers also observed a decreasing trend in pain scores. These changes, however, did not reach statistical significance; the sample may have been too small to detect a definitive effect on pain.
The mechanisms behind these benefits likely involve both physiological and psychological processes. Interacting with plants is thought to activate the parasympathetic nervous system. This system is responsible for the bodyβs βrest and digestβ functions. Activation leads to reduced heart rate and lower stress levels.
Psychologically, the act of nurturing a living organism provides a sense of purpose. Cancer treatment often strips patients of their autonomy and control. Growing a garden restores a small but meaningful degree of agency. The participants witnessed the tangible results of their care as the plants grew. This success likely reinforced their feelings of self-efficacy.
The study also highlights the potential of βbiophiliaβ in a clinical context. This concept suggests that humans have an innate tendency to seek connections with nature. Even a small indoor device appears to satisfy this need enough to provide therapeutic value. The multisensory engagement of seeing green leaves and handling the plants may promote mindfulness.
βEven a small, indoor hydroponic garden can make a noticeable difference in mental wellbeing, mood, and quality of life for people undergoing cancer treatment,β Roh said. βHydroponic gardening also makes the benefits of gardening accessible to nearly anyoneβeven older adults, people with disabilities, individuals with limited mobility, or those living without outdoor space.β
βBecause it can be done indoors in any season, it removes barriers related to climate, weather, and physical limitations. You donβt need a yard or gardening experience to benefitβsimply caring for plants at home can boost mood and encourage healthier habits.β
Despite the positive findings, the study has some limitations. The sample size of 36 patients is relatively small. This limits the ability to generalize the results to the broader cancer population. The lack of a separate control group is another constraint. Without a control group, it is difficult to say with certainty that the gardening caused the improvements. Other factors could have contributed to the changes over time. Additionally, the study lasted only eight weeks. It remains unclear if the mental health benefits would persist after the intervention ends.
βThis was a pilot study with no control group, and it was designed to test feasibility rather than establish causation,β Roh explained. βThe improvements we observed are encouraging, but they should not be interpreted as proof that hydroponic gardening directly causes better mental health outcomes. Larger, controlled studies are needed to confirm and expand on these findings.β
βOur next step is to conduct a larger, randomized controlled trial with longer follow-up to examine sustained effects and understand which patient groups benefit most. We also hope to integrate objective engagement measuresβsuch as plant growth tracking or digital activity logsβto complement self-reported data. Ultimately, we aim to develop a scalable, evidence-based gardening program that can be offered widely in cancer centers and community health settings.β
βPatients repeatedly told us that caring for their plants gave them something to look forward toβa small but meaningful source of joy and control during treatment,β Roh added. βThat human element is at the heart of this work. Our hope is that hydroponic gardening can become a simple, accessible tool for improving wellbeing not only in cancer care, but also in communities with limited access to nature.β
The study, βIndoor hydroponic vegetable gardening to improve mental health and quality of life in cancer patients: a pilot study,β was authored by Taehyun Roh, Laura Ashley Verzwyvelt, Anisha Aggarwal, Raj Satkunasivam, Nishat Tasnim Hasan, Nusrat Fahmida Trisha, and Charles Hall.