An analysis of the China Health and Retirement Longitudinal Study data found that individuals with more severe depressive symptoms tend to report higher levels of social isolation at a later time point. In turn, individuals who are more socially isolated tend to report slightly worse cognitive functioning. Analyses showed that social isolation mediates a small part of the link between depressive symptoms and worse cognitive functioning. The paper was published in the Journal of Affective Disorders.
Depression is a mental health disorder characterized by persistent sadness, loss of interest or pleasure, and feelings of hopelessness that interfere with daily functioning. It adversely affects the way a person thinks, feels, and behaves. It can lead to difficulties in work, relationships, and self-care.
People with depression may experience fatigue, changes in appetite, and sleep disturbances. Concentration and decision-making can become harder, reducing productivity and motivation. Physical symptoms such as pain, headaches, or digestive issues may also appear without clear medical causes.
Depression can diminish the ability to enjoy previously pleasurable activities, leading to social withdrawal. This isolation can worsen depressive symptoms, creating a cycle of loneliness and despair. Social isolation itself is both a risk factor for developing depression and a common consequence of it.
Study author Jia Fang and her colleagues note that depressed individuals also tend to show worse cognitive functioning. They conducted a study aiming to explore the likely causal direction underpinning the longitudinal association between depressive symptoms and cognitive decline, and the possible mediating role of social isolation in this link, among Chinese adults aged 45 years and above. The authors hypothesized that social isolation mediates the association between depressive symptoms and cognitive function.
Study authors analyzed data from the China Health and Retirement Longitudinal Study (CHARLS), a nationally representative longitudinal survey of Chinese residents aged 45 and above. This analysis used CHARLS data from three waves conducted in 2013, 2015, and 2018, covering a total of 9,220 participants; 51.4% were women, and the average age was 58 years.
The authors of the study used data on participants’ depressive symptoms (measured with the 10-item Center for Epidemiologic Studies Depression Scale), social isolation, and cognitive function (assessed with tests of episodic memory and mental intactness). A social isolation score was calculated from four factors: being unmarried (single, separated, divorced, or widowed), living alone, having less than weekly contact with children (in person, by phone, or by email), and not participating in any social activities in the past month.
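To make the scoring concrete, here is a minimal sketch of how such a four-component count could be computed. The function and variable names are our own illustration, not CHARLS field names.

```python
# Hypothetical sketch of the four-component social isolation score
# described above (names are illustrative, not CHARLS variables).

def social_isolation_score(
    is_unmarried: bool,           # single, separated, divorced, or widowed
    lives_alone: bool,
    child_contact_weekly: bool,   # any in-person/phone/email contact at least weekly
    social_activity_past_month: bool,
) -> int:
    """Return a 0-4 count of isolation indicators; higher means more isolated."""
    indicators = [
        is_unmarried,
        lives_alone,
        not child_contact_weekly,
        not social_activity_past_month,
    ]
    return sum(indicators)

# Example: a widowed respondent who lives alone, talks to a child weekly,
# and attended a social activity last month scores 2.
print(social_isolation_score(True, True, True, True))  # -> 2
```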
Results showed that depressive symptoms were associated with subsequent social isolation. Social isolation, in turn, was associated with subsequent worse cognitive functioning. Further analyses showed that social isolation partially mediated the link between depressive symptoms and cognitive functioning, explaining 3.1% of the total effect.
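To illustrate where a figure like “3.1% of the total effect” comes from, here is a toy product-of-coefficients mediation calculation. The coefficients below are made up for illustration; they are not the paper’s estimates.

```python
# Toy illustration of "proportion mediated" in a simple mediation model.
# Coefficients are invented for illustration only.

a = 0.20         # effect of depressive symptoms on later social isolation
b = -0.05        # effect of social isolation on later cognitive function
c_prime = -0.31  # direct effect of depressive symptoms on cognition

indirect = a * b              # mediated (indirect) effect
total = c_prime + indirect    # total effect = direct + indirect
proportion_mediated = indirect / total

print(f"{proportion_mediated:.1%}")  # ~3.1% with these illustrative numbers
```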
The study authors concluded that the association between depressive symptoms and cognitive function is partially mediated by social isolation. They suggest that public health initiatives targeting depressive symptoms could reduce social isolation and help preserve cognitive health among middle-aged and older adults in China.
The study sheds light on the nature of the link between depressive symptoms and cognitive functioning. However, it should be noted that the design of the study does not allow definitive causal inferences to be derived from these results. Additionally, social isolation was assessed through self-reports, leaving room for reporting bias to have affected the results. Finally, the reported mediation effect was very modest in size, indicating that the link between depression and cognitive functioning operates largely through factors other than social isolation.
A new study provides evidence that the human brain constructs our seamless experience of the world by first breaking it down into separate predictive models. These distinct models, which forecast different aspects of reality like context, people’s intentions, and potential actions, are then unified in a central hub to create our coherent, ongoing subjective experience. The research was published in the journal Nature Communications.
The scientists behind the new study proposed that our world model is fragmented into at least three core domains. The first is a “State” model, which represents the abstract context or situation we are in. The second is an “Agent” model, which handles our understanding of other people, their beliefs, their goals, and their perspectives. The third is an “Action” model, which predicts the flow of events and possible paths through a situation.
“There’s a long-held tradition, with good evidence, that the mind is composed of many different modules specialized for distinct computations. This is obvious in perception, with modules dedicated to faces and places. It is not obvious in the higher-order, more abstract domains that drive our subjective experience. The problem here is non-trivial: if the mind does have multiple modules, how can our experience seem unified?” explained study author Fahd Yazin, a medical doctor who is currently a doctoral candidate at the University of Edinburgh.
“In learning theories, there are distinct computations needed to form what is called a world model. We need to infer from sensory observations what state we are in (context). For example, if you go to a coffee shop, the state is that you’re about to get a coffee. But if you find that the machine is out of order, then the current state is that you’re not going to get it. Similarly, you need to have a frame of reference (frame) to put these states in. For instance, if you want to go to the next shop but your friend had a bad experience there previously, you need to take their perspective (or frame) into account. You possibly had a plan of getting a coffee and chatting, but now you’re willing to adopt a new plan (action transitions) of getting a matcha drink instead.”
“You’re able to do all these things in a deceptively simple way because various modules can coordinate their outputs, or predictions, together, and switch between various predictions effortlessly. So, if we disrupt their ongoing predictions in a natural and targeted way, you can get at two things: the brain regions dedicated to these predictions, and how they influence our subjective experience.”
To explore this, the research team conducted a series of experiments using functional magnetic resonance imaging, a technique that measures brain activity by detecting changes in blood flow. In the main experiment, a group of 111 young adults watched an eight-minute suspenseful excerpt from an Alfred Hitchcock film, “Bang! You’re Dead!” while inside a scanner. They were given no specific instructions other than to watch the movie, allowing the scientists to observe brain activity during a naturalistic experience.
To understand when participants’ predictions were being challenged and updated, the researchers collected data from separate groups of people who watched the same film online. These participants were asked to press a key whenever their understanding of the movie’s context (State), a character’s beliefs (Agent), or the likely course of events (Action) suddenly changed. By combining the responses from many individuals, the scientists created timelines showing the precise moments when each type of belief was most likely to be updated.
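One simple way to turn individual keypresses into group-level timelines is to bin each rater’s response times and count what fraction of raters reported an update in each bin. The sketch below illustrates that logic; the binning choices are our assumptions, not the paper’s exact procedure.

```python
import numpy as np

# Hedged sketch: combine raters' keypress times (seconds into the film)
# into a group-level belief-update timeline. Binning is an assumption.

def update_timeline(keypress_times_per_rater, film_length_s, bin_s=1.0):
    """Fraction of raters reporting an update in each time bin."""
    n_bins = int(np.ceil(film_length_s / bin_s))
    counts = np.zeros(n_bins)
    for times in keypress_times_per_rater:
        # Count each rater at most once per bin.
        bins = np.unique((np.asarray(times) // bin_s).astype(int))
        counts[bins[bins < n_bins]] += 1
    return counts / len(keypress_times_per_rater)

# Three raters watching a 10-second clip:
timeline = update_timeline([[2.1, 7.5], [2.4], [7.9]], film_length_s=10)
print(timeline)  # peaks near 2 s and 7 s, where raters agree
```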
Analyzing the brain scans from the movie-watching group, the scientists found a clear division of labor in the midline prefrontal cortex, a brain area associated with higher-level thought. When the online raters indicated a change in the movie’s context, the ventromedial prefrontal cortex became more active in the scanned participants. When a character’s perspective or intentions became clearer, the anteromedial prefrontal cortex showed more activity. And when the plot took a turn that changed the likely sequence of future events, the dorsomedial prefrontal cortex was engaged.
The researchers also found that these moments of belief updating corresponded to significant shifts in the brain’s underlying neural patterns. Using a computational method called a Hidden Markov Model, they identified moments when the stable patterns of activity in each prefrontal region abruptly transitioned. These neural transitions in the ventromedial prefrontal cortex aligned closely with updates to “State” beliefs.
Similarly, transitions in the anteromedial prefrontal cortex coincided with “Agent” updates, and those in the dorsomedial prefrontal cortex matched “Action” updates. This provides evidence that when our predictions about the world are proven wrong, it triggers not just a momentary spike in activity, but a more sustained shift in the neural processing of that specific brain region.
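For readers unfamiliar with the technique, the sketch below shows the general shape of such an analysis: fit a Hidden Markov Model to a region’s timecourses and treat changes in the inferred latent state as neural transitions. The settings here (five states, diagonal covariance, simulated data) are illustrative assumptions, not the authors’ pipeline.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Generic HMM sketch: infer latent neural states from a region's activity
# and mark the timepoints where the state changes. All settings are
# illustrative; this is not the paper's exact analysis.

rng = np.random.default_rng(0)
roi_timeseries = rng.standard_normal((300, 20))  # 300 TRs x 20 voxels (fake data)

model = GaussianHMM(n_components=5, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(roi_timeseries)
states = model.predict(roi_timeseries)

# Timepoints where the inferred latent state switches:
transitions = np.flatnonzero(np.diff(states)) + 1
print(transitions[:10])
# These transition times could then be compared against the behavioral
# belief-update timelines to test for alignment.
```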
Having established that predictions are handled by separate modules, the researchers next sought to identify where these fragmented predictions come together. They focused on the precuneus, a region located toward the back of the brain that is known to be a major hub within the default mode network, a large-scale brain network involved in internal thought.
By analyzing the functional connectivity, or the degree to which different brain regions activate in sync, they found that during belief updates, each specialized prefrontal region showed increased communication with the precuneus. This suggests the precuneus acts as an integration center, receiving the updated information from each predictive module.
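A generic way to operationalize such a connectivity contrast is to correlate two regions’ timecourses separately within update and non-update windows, as in the hedged sketch below. This illustrates the general idea, not the authors’ exact method.

```python
import numpy as np

# Sketch of a simple functional-connectivity contrast: correlate two
# regions' timecourses within "update" vs. "no-update" windows.
# Data here are simulated; real analyses would use fMRI timecourses.

def connectivity(ts_a, ts_b, mask):
    """Pearson correlation of two 1-D timecourses over masked timepoints."""
    return np.corrcoef(ts_a[mask], ts_b[mask])[0, 1]

rng = np.random.default_rng(1)
prefrontal = rng.standard_normal(300)
precuneus = 0.5 * prefrontal + rng.standard_normal(300)  # shared signal (fake)

update_mask = np.zeros(300, dtype=bool)
update_mask[100:150] = True  # hypothetical belief-update window

print(connectivity(prefrontal, precuneus, update_mask))   # during updates
print(connectivity(prefrontal, precuneus, ~update_mask))  # outside updates
```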
To further investigate this integration, the team examined the similarity of multivoxel activity patterns between brain regions. They discovered a dynamic process they call “multithreaded integration.” When participants’ beliefs about the movie’s context were being updated, the activity patterns in the precuneus became more similar to the patterns in the “State” region of the prefrontal cortex.
When beliefs about characters were changing, the precuneus’s patterns aligned more with the “Agent” region. This indicates that the precuneus flexibly syncs up with whichever predictive module is most relevant at a given moment, effectively weaving the separate threads of prediction into a single, coherent representation.
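The logic of this pattern-similarity analysis can be sketched as follows: at each timepoint, correlate the precuneus’s multivoxel pattern with each module’s pattern and note which module it currently resembles most. Treating the patterns as equal-length vectors (as if projected into a shared space) is our simplification for illustration.

```python
import numpy as np

# Sketch of "multithreaded integration": which prefrontal module does the
# precuneus pattern resemble most at each timepoint? Shapes, names, and
# the shared-space assumption are ours, not the paper's.

rng = np.random.default_rng(2)
T, V = 300, 50  # timepoints x features (fake data)
precuneus = rng.standard_normal((T, V))
modules = {"State": rng.standard_normal((T, V)),
           "Agent": rng.standard_normal((T, V)),
           "Action": rng.standard_normal((T, V))}

def spatial_corr(a, b):
    """Across-feature correlation of two patterns at one timepoint."""
    return np.corrcoef(a, b)[0, 1]

# Best-aligned module at each timepoint:
alignment = [max(modules, key=lambda m: spatial_corr(precuneus[t], modules[m][t]))
             for t in range(T)]
print(alignment[:5])
```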
The scientists then connected this integration process to subjective experience. Using separate ratings of emotional arousal, a measure of how engaged and immersed viewers were in the film, they found that the activity of the precuneus closely tracked the emotional ups and downs of the movie. The individual prefrontal regions did not show this strong relationship.
What’s more, individuals whose brains showed stronger integration between the prefrontal cortex and the precuneus also had more similar overall brain responses to the movie. This suggests that the way our brain integrates these fragmented predictions directly shapes our shared subjective reality.
“At any given time, multiple predictions may compete or coexist, and our experience can shift depending on which predictions are integrated that best align with reality,” Yazin told PsyPost. “People whose brains make and integrate predictions in similar ways are likely to have more similar experiences, while differences in prediction patterns may explain why individuals perceive the same reality differently. This approach provides new insight into how shared realities and personal differences arise, offering a framework for understanding human cognition.”
To confirm these findings were not specific to one movie or to visual information, the team replicated the key analyses using a different dataset in which participants listened to a humorous spoken-word story. They found the same modular system in the prefrontal cortex and the same integrative role for the precuneus, demonstrating that this is a general mechanism for how the brain models the world, regardless of the sensory input.
“We replicated the main findings across a different cohort, sensory modality and emotional content (stimuli), making these findings robust to idiosyncratic factors,” Yazin said. “These results were observed when people were experiencing stimuli (movie/story) in a completely uninterrupted and uninstructed manner, meaning our experience is continuously rebuilt and adapted into a coherent unified stream despite it originating in a fragmented manner.”
“Our experience is not just a simple passive product of our sensory reality. It is actively driven by our predictions. And these come in different flavors: about the contexts we find ourselves in, about other people, and about our plans for the immediate future. Each of these gets updated as the sensory reality agrees (or disagrees) with our predictions. And integrates with that reality to form our ‘current’ experience.”
“We have multiple such predictions internally, and at any given time our experience can toggle between these depending on how the reality fits them,” Yazin explained. “In other words, our original experience is a product of fragmented and distributed predictions integrated into a unified whole. And people with similar ways of predicting and integrating would have more similar experiences of reality than people who are dissimilar.”
“More importantly, it brings the default mode network, a core network in the human brain, to the table as a central network driving our core phenomenal experience. It’s widely implicated in learning, inference, imagination, memory recall, and in dysfunctions of these. Our results offer a framework to fractionate this network by the computations of its core components.”
But as with all research, the study has some limitations. The analysis is correlational, meaning it shows associations between brain activity and belief updates but cannot definitively prove causation. Also, because the researchers used naturalistic stories, the different types of updates were not always completely independent; a single plot twist could sometimes cause a viewer to update their understanding of the context, a character, and the future plot all at once.
Still, the consistency of the findings across two very different naturalistic experiences provides strong support for a new model of human cognition. “Watching a suspenseful movie and listening to a comedic story feel like two very different experiences, but the fact that they have similar underlying regions with similar specialized processes for generating predictions was counterintuitive,” Yazin told PsyPost. “And that we could observe it in this data was something unexpected.”
Future research will use more controlled, artificially generated stimuli to better isolate the computations happening within each module.
“We’re currently exploring the nature of these computations in more depth,” Yazin said. “In naturalistic stimuli as we’ve used now, it is impossible to fully separate domains (the contributions of people and contexts are intertwined in such settings). It brings richness but you lose experimental control. Similarly, the fact that these prefrontal regions were sensitive regardless of content and sensory information means there is possibly an invariant computation going on within them. We’re currently investigating these using controlled stimuli and probabilistic models to answer these questions.”
“For the last decade or so, there’s been two cultures in cognitive neuroscience,” he added. “One is using highly controlled stimuli, and leveraging stimulus properties to ascertain regional involvement to that function to various degrees. Second is using full-on naturalistic stimuli (movies, narratives, games) to understand how humans experience the world with more ecological accuracy. Each has brought unique and incomparable insights.”
“We feel studies on subjective experience/phenomenal consciousness have focused more on the former because it is easier to control (perceptual features/changes), but there’s a rich tradition and methods in the latter school that may help uncover more intractable problems in novel ways. Episodic memory and semantic processing are two great examples of this, where using naturalistic stimuli opened up connections and findings that were completely new to each of those fields.”
A new scientific analysis has uncovered a direct genetic link between higher cognitive function in childhood and a longer lifespan. The findings suggest that some of the same genetic factors influencing a child’s intelligence are also associated with how long they will live. This research, published in the peer-reviewed journal Genomic Psychiatry, offers the first molecular evidence connecting childhood intellect and longevity through shared genetic foundations.
For many years, scientists in a field known as cognitive epidemiology have observed a consistent pattern: children who score higher on intelligence tests tend to live longer. A major review of this phenomenon, which analyzed data from over one million people, found that for each standard deviation increase in cognitive test scores in youth, there was a 24 percent lower risk of death over several decades. The reasons for this connection have long been a subject of debate, with questions about whether it was due to lifestyle, socioeconomic status, or some underlying biological factor.
Previous genetic studies have identified an association between cognitive function in adults and longevity. A problem with using adult data, however, is the possibility of reverse causation. Poor health in later life can negatively affect a person’s cognitive abilities and simultaneously shorten their life. This makes it difficult to determine if genes are linking intelligence to longevity, or if later-life health issues are simply confounding the results by impacting both traits at the same time.
To overcome this challenge, a team of researchers led by W. David Hill at the University of Edinburgh sought to examine the genetic relationship using intelligence data from childhood, long before adult health problems could become a complicating factor. Their goal was to see if the well-documented association between youthful intelligence and a long life had a basis in shared genetics. This approach would provide a cleaner look at any potential biological connections between the two traits.
The researchers did not collect new biological samples or test individuals directly. Instead, they performed a sophisticated statistical analysis of data from two very large existing genetic databases. They used summary results from a genome-wide association study on childhood cognitive function, which contained genetic information from 12,441 individuals. This type of study scans the entire genetic code of many people to find tiny variations associated with a particular trait.
They then took this information and compared it to data from another genome-wide association study focused on longevity. This second dataset was much larger, containing genetic information related to the lifespan of the parents of 389,166 people. By applying a technique called linkage disequilibrium score regression, the scientists were able to estimate the extent to which the same genetic variants were associated with both childhood intelligence and a long life.
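For context, the quantity this technique estimates is the genetic correlation: the genetic covariance between the two traits, scaled by the square roots of their SNP-based heritabilities. In standard notation:

```latex
r_g = \frac{\rho_g}{\sqrt{h^2_{\text{cognition}} \, h^2_{\text{lifespan}}}}
```

Here \(\rho_g\) is the genetic covariance between childhood cognitive function and parental lifespan, and the \(h^2\) terms are the two traits’ SNP heritabilities. A value of 1 would mean the same genetic variants influence both traits in proportion, while 0 would mean no shared genetic influence.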
The analysis revealed a positive and statistically significant genetic correlation between childhood cognitive function and parental longevity. The correlation estimate was 0.35, which indicates a moderate overlap in the genetic influences on both traits. This result provides strong evidence that the connection between being a brighter child and living a longer life is, at least in part, explained by a shared genetic architecture. The same genes that contribute to higher intelligence in youth appear to also contribute to a longer lifespan.
The researchers explain that this shared genetic influence, a concept known as pleiotropy, could operate in a few different ways. The presence of a genetic correlation is consistent with multiple biological models, and the methods used in this study cannot definitively separate them. One possible explanation falls under a model of horizontal pleiotropy, where a set of genes independently affects both brain development and bodily health.
This idea supports what some scientists call the “system integrity” hypothesis. According to this view, certain genetic makeups produce a human system, both brain and body, that is inherently more robust. Such a system would be better at withstanding environmental challenges and the wear and tear of aging, leading to both better cognitive performance and greater longevity.
Another possibility is a model of vertical pleiotropy. In this scenario, the genetic link is more like a causal chain of events. Genes primarily influence childhood cognitive function. Higher cognitive function then enables individuals to make choices and navigate environments that are more conducive to good health and a long life. For example, higher intelligence is linked to achieving more education, which in turn is associated with better occupations, greater health literacy, and healthier behaviors, all of which promote longevity.
A limitation of this work is its inability to distinguish between these different potential mechanisms. The study confirms that a genetic overlap exists, but it does not tell us exactly how that overlap functions biologically. The research identifies an average shared genetic effect across the genome. It does not provide information about which specific genes or biological pathways are responsible for this link. Additional work is needed to identify the precise regions of the genome that drive this genetic correlation between early-life cognitive function and how long a person lives.