Few developments in recent memory have generated as much excitement — and as much anxiety — within the neuroscience and psychiatry communities as the rapid emergence of artificial intelligence as a tool for clinical decision-making, neuroimaging analysis, and mental health service delivery. As the Journal of Global Health Neurology and Psychiatry opens its third volume, we have chosen to devote this editorial to what is arguably the defining technological conversation in our field: the promise, the peril, and — above all — the global equity implications of AI in brain health care.

This is not a topic we approach lightly or uncritically. The history of technological “solutions” in global health is littered with innovations that were developed in high-income settings, validated on non-representative populations, and deployed in low- and middle-income countries (LMICs) with insufficient attention to local context, infrastructure, or the communities most affected. AI in mental health and neurology risks repeating this pattern at scale — and at speed — unless the global health community intervenes deliberately and now.

The State of AI in Neurology and Psychiatry

The pace of AI development in neurological and psychiatric medicine has been remarkable. Machine learning algorithms have demonstrated performance comparable to or exceeding that of specialist clinicians in specific tasks: the detection of diabetic retinopathy, the interpretation of electroencephalograms, the segmentation of brain tumours on MRI, and the early identification of Alzheimer’s disease from neuroimaging data.1 In psychiatry, natural language processing tools have shown the capacity to detect linguistic markers of psychosis, depression, and suicide risk in free text and speech with a sensitivity that challenges traditional clinical assessment.2

The therapeutic frontier has advanced equally rapidly. Conversational AI platforms delivering cognitive behavioural therapy-informed interventions have been evaluated in randomised trials, with some demonstrating reductions in symptoms of depression and anxiety that, while modest, are statistically significant and scalable to populations that would otherwise receive no care at all.3 The launch of large language model-based applications with apparent counselling capabilities, now reaching hundreds of millions of users worldwide, has pushed deployment still further ahead of the evidence base that should underpin it.

The Equity Problem at the Heart of AI in Global Brain Health

The scientific achievements summarised above are genuine and deserve recognition. But they cannot be evaluated in isolation from the conditions under which AI systems are built and the populations on which they are trained. The overwhelming majority of the neuroimaging datasets, electronic health records, and linguistic corpora underpinning current AI systems in neurology and psychiatry derive from patients in North America, Western Europe, and East Asia. The implications of this are profound.

Neurological disease presentation varies across populations. The prevalence of specific epilepsy syndromes, the vascular risk factor profiles driving stroke, the genetic architecture of neurodegenerative disease — all differ in ways that matter for algorithmic performance. An AI system trained predominantly on data from one population cannot be assumed to perform equivalently when deployed in another. A landmark review published in 2023 found that fewer than 5% of studies developing AI tools for psychiatric diagnosis included datasets from LMICs, and that performance metrics reported in validation studies were rarely disaggregated by ethnicity, income level, or geographic region.4
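
The non-transferability point can be made concrete with a minimal sketch. The simulation below is entirely illustrative: the two cohorts, the cohort-specific feature weights, and the choice of a logistic regression classifier are assumptions chosen only to show how a model fitted to one population's risk-factor profile degrades on another's, and how internal validation alone conceals this.

```python
# A minimal sketch of cross-population validation, using simulated data.
# Everything here is an illustrative assumption: the cohorts, the
# cohort-specific feature weights, and the choice of logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def make_cohort(n, weights):
    # Simulate a cohort whose features relate to the outcome through
    # cohort-specific weights: a crude stand-in for differing
    # risk-factor profiles across populations.
    X = rng.normal(size=(n, 4))
    p = 1.0 / (1.0 + np.exp(-(X @ weights)))
    y = (rng.random(n) < p).astype(int)
    return X, y

# Source population: the outcome is driven mainly by features 0 and 1.
X_src, y_src = make_cohort(6000, np.array([2.0, 1.5, 0.1, 0.0]))
# External population: feature 2 dominates and feature 0 barely matters.
X_ext, y_ext = make_cohort(6000, np.array([0.3, 1.5, 2.0, 0.0]))

X_tr, X_te, y_tr, y_te = train_test_split(
    X_src, y_src, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

auc_internal = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"AUC on held-out source data: {auc_internal:.3f}")
print(f"AUC on the external cohort:  {auc_external:.3f}")
```

On such shifted data the external AUC falls well below the internal figure; a validation report built only on the source population would never reveal the gap.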

The linguistic dimension compounds this problem. Mental health assessment is inherently language-dependent: the validity of natural language processing tools trained on English-language corpora cannot be assumed to transfer to the hundreds of languages spoken by the populations bearing the greatest global burden of psychiatric disease. Cultural idioms of distress — the ways in which psychological suffering is expressed and understood across different societies — are not captured by training datasets built on Western clinical records.5

Algorithmic Bias and the Amplification of Existing Inequities

The problem of algorithmic bias in medicine has been extensively documented in domains such as dermatology and cardiology, where systems trained on non-representative data have demonstrated systematically inferior performance for patients of colour and for women. Psychiatry and neurology are not exempt from this risk. Diagnostic algorithms that reflect the biases embedded in historical clinical data — biases that have, for example, resulted in the systematic underdiagnosis of autism in girls and in ethnic minority populations — may perpetuate and entrench those same inequities at scale.6

Particularly concerning is the deployment of AI-based suicide risk prediction tools within health systems that lack the safeguards, follow-up capacity, and cultural competence to act appropriately on their outputs. A high-sensitivity alert system is of limited value if the clinician receiving the alert has no pathway to provide timely support, or if the risk model was validated on a population whose relationship to suicidal ideation, help-seeking, and disclosure differs substantially from that of the patient in front of them.

What Responsible AI in Global Brain Health Requires

None of the above argues for a moratorium on AI development in neurology and psychiatry. The potential to extend the reach of specialist expertise to populations with a critical shortage of neurologists and psychiatrists — a shortage that is most severe in precisely the LMICs where disease burden is highest — is too significant to forgo. What it argues for is a set of principled commitments that the research community, journal editors, funding agencies, and technology developers must collectively honour.

First, AI tools intended for global deployment must be developed with global data from the outset, not retrofitted with LMIC validation studies after the fact. This requires investment in data infrastructure, ethical frameworks for data governance, and genuine co-development partnerships with researchers and communities in LMICs. Second, reporting standards for AI studies in our field must require disaggregated performance metrics across demographic and geographic subgroups — a standard that, as recently as 2023, remained inconsistently applied even in leading journals.7
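
What such disaggregated reporting requires is technically modest, as the following minimal sketch suggests. The per-patient records, regional groupings, and column names are hypothetical, chosen only to show the shape of the output: sensitivity and specificity reported per subgroup rather than as a single pooled figure.

```python
# A minimal sketch of disaggregated reporting. The per-patient records,
# region labels, and column names below are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score

results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
    "y_pred": [1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1],
    "region": ["North America", "North America", "North America",
               "Sub-Saharan Africa", "Sub-Saharan Africa",
               "Sub-Saharan Africa", "South Asia", "South Asia",
               "South Asia", "North America", "South Asia",
               "Sub-Saharan Africa"],
})

# Report sensitivity and specificity per region, not one pooled figure.
for region, grp in results.groupby("region"):
    sens = recall_score(grp["y_true"], grp["y_pred"],
                        pos_label=1, zero_division=0)
    spec = recall_score(grp["y_true"], grp["y_pred"],
                        pos_label=0, zero_division=0)
    print(f"{region}: n={len(grp)}, "
          f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

On these toy records the pooled sensitivity is roughly 0.71, which conceals a spread from 0.50 to 1.00 across the three regions; that spread, not the pooled number, is what editors, reviewers, and implementers need to see.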

Third, and perhaps most fundamentally, the communities most affected by neurological and psychiatric disorders in LMICs must be participants in — not merely subjects of — the AI systems designed to serve them. This means involving patients, families, community health workers, and local clinicians in the design, validation, and governance of these tools. It means asking not only “does this algorithm work?” but “for whom does it work, under what conditions, and at whose direction?”

The Journal’s Commitment

The Journal of Global Health Neurology and Psychiatry is committed to being a rigorous and equitable venue for AI research in our field. We will prioritise manuscripts that address the global equity dimensions of AI in brain health, that report disaggregated performance data, and that involve co-investigators from the settings in which tools are intended to be deployed. We will apply heightened scrutiny to studies that report AI performance metrics without adequate attention to external validity and generalisability across populations.

The digital transformation of mental health and neurological care is already underway. The question before us is not whether AI will reshape the landscape of global brain health, but whether that reshaping will reduce or deepen the inequities that have long defined it. The answer to that question will depend on choices made now — in research design, in editorial policy, in funding priorities, and in the values we collectively bring to one of the most consequential technological transitions in the history of medicine.