In today’s column, I continue my ongoing coverage of the latest trends in AI for mental health by highlighting a new initiative at Stanford University, known aptly as AI4MH, undertaken by Stanford’s esteemed Department of Psychiatry and Behavioral Sciences in the School of Medicine.
The inaugural launch of AI4MH took place on April 15, 2025, and Dr. Tom Insel, the famed psychiatrist and neuroscientist, served as the kick-off speaker. Dr. Insel is renowned for his outstanding work in mental health research and technology and formerly served as Director of the National Institute of Mental Health (NIMH). He is also known for having founded several companies that innovatively integrate high-tech into mental health care.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
The Growing Realm Of AI And Mental Health
Readers familiar with my coverage on AI for mental health might recall that I’ve closely examined and reviewed a myriad of important aspects underlying this rapidly evolving topic, doing so in over one hundred of my column postings.
This includes analyzing the latest notable research papers and avidly assessing the practical utility of apps and chatbots employing generative AI and large language models (LLMs) for performing mental health therapy. I have spoken about those advances, such as during an appearance on a CBS 60 Minutes episode last year, and compiled the analyses into two popular books depicting the disruption and transformation that AI is bringing to mental health care.
It is with great optimism that I share here the new initiative at the Stanford School of Medicine on AI4MH, and I fully anticipate that this program will provide yet another crucial step in identifying where AI for mental health is heading and its impacts on society all told. Per the mission statement articulated for AI4MH:
- “AI4MH aims to transform research, diagnosis, and treatment of psychiatric & behavioral disorders by creating and using responsible AI. To achieve this vision, we create AI tools tailored towards psychiatric applications, facilitate their use within the department, foster interdisciplinary collaborations, and provide cutting-edge knowledge” (source: official website for Stanford’s AI4MH, see the link here).
Thanks go to the organizers of AI4MH whom I met at the inaugural event, including Dr. Kilian Pohl, Professor of Psychiatry and Behavioral Sciences (Major Labs and Incubator), Ehsan Adeli, Assistant Professor of Psychiatry and Behavioral Sciences (Public Mental Health and Population Sciences), and Carolyn Rodriguez, Professor of Psychiatry and Behavioral Sciences (Public Mental Health and Population Sciences), and others, for their astute vision and resolute passion for getting this vital initiative underway.
Keynote Talk Sets The Stage
During his talk, Dr. Insel carefully set the stage, depicting the current state of AI for mental health care and insightfully exploring where the dynamic field is heading. His remarks established a significant point that I’ve been repeatedly urging, namely that our existing approach to mental health care is woefully inadequate and that we need to rethink and reformulate what is currently being done.
The need, or shall we say, the growing demand for mental health care is astronomical, yet the available and accessible supply of quality therapists and mental health advisors falls far short in numerous respects.
I relished that Dr. Insel turned this intuitive sense of the mounting issue into a codified and well-structured set of five major factors:
- (1) Diagnosis
- (2) Engagement
- (3) Capacity
- (4) Quality
- (5) Accountability
I’ll recap each of those essential factors.
The Five Factors Explained
Starting with diagnosis as a key factor, it is perhaps surprising to some to discover that the diagnosis of mental health conditions is a lot more loosey-goosey than might otherwise be assumed. The layperson tends to assume that a precise and fully calculable means exists to produce a mental health diagnosis to an ironclad nth degree. This is not the case. If you peruse the DSM-5 standard guidebook, you’ll quickly realize that there is a lot of latitude and imprecision underpinning the act of diagnosis. The upshot is that there is a lack of clarity when it comes to undertaking a diagnosis, and we need to recognize that this is a serious problem requiring much more rigor and reliability.
For my detailed look at the DSM-5 and how generative AI leans into the guidebook contents while performing AI-based mental health diagnoses, see the link here.
The second key factor entails engagement.
The deal is this. People needing or desiring mental health care are often unable to readily gain access to mental health care resources. This can be due to cost, logistics, and a litany of economic and supply/demand considerations. Dr. Insel noted a statistic that perhaps 60% of those who could potentially benefit from therapy aren’t receiving mental health care, meaning a sizable share of people aren’t getting the help they need. That’s a problem that deserves close scrutiny and outside-the-box thinking to resolve.
A related factor is capacity, the third of the five listed.
We don’t have enough therapists and mental health professionals, along with related facilities, to meet the existing and growing needs for mental health care. In the United States, for example, various published counts suggest there are approximately 200,000 therapists and perhaps 100,000 psychologists, supporting a population of nearly 350 million people. That ratio won’t cut it, and indeed, studies indicate that practicing mental health care professionals are overworked, highly stressed, and unable to readily manage workloads that at times risk compromising the quality of care.
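For a rough sense of what that ratio implies, here is a minimal back-of-the-envelope sketch, treating the counts above strictly as approximations rather than precise tallies:

```python
# Back-of-the-envelope ratio using the approximate counts cited above
therapists = 200_000       # approximate published count of therapists in the U.S.
psychologists = 100_000    # approximate published count of psychologists in the U.S.
population = 350_000_000   # roughly the U.S. population

people_per_clinician = population / (therapists + psychologists)
print(f"Roughly {people_per_clinician:,.0f} people per practicing clinician")
# Output: Roughly 1,167 people per practicing clinician
```

Even if the true counts differ somewhat, a ratio on the order of one clinician per thousand or more people helps explain why demand so badly outstrips supply.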
For my coverage of how therapists are using AI as a means of augmenting their practice, allowing them to focus more on their clients and sensibly cope with the heightened workloads, see the link here.
The fourth factor is quality.
You can plainly see from the other factors how quality can be insidiously undercut. If a therapist is tight for time and trying to see as many patients as possible, the odds of quality taking a hit are fairly obvious. Overall, even with the best of intentions, quality is frequently fragmented and episodic. There is also a kind of reactive quality phenomenon: after realizing that quality is suffering, a short-term boost in quality occurs, but this soon fizzles out, and the constraining infrastructure pulls things back to the prior, somewhat haphazard quality levels.
For my analysis of how AI can be used to improve quality when it comes to mental health care, see the link here.
Accountability is the fifth factor.
There’s a famous quote attributed to the legendary management guru Peter Drucker that what gets measured gets managed. The corollary to that wisdom is that what doesn’t get measured is bound to be poorly managed. The same holds true for mental health care. By and large, there is sparse data on the outcomes associated with mental health therapy. Worse still, perhaps, the adoption of evidence-based mental health care is thin and leaves us in the dark about the big picture associated with the efficacy of therapy.
For my discussion about AI as a means of collecting mental health data and spurring evidence-based care, see the link here and the link here.
Bringing AI Into The Picture
The talk made clear that we pretty much have a broken system when it comes to mental health care today, and that if we don’t do something at scale about it, the prognosis is that things will get even worse.
A tsunami of mental health needs is heading toward us. The mental health therapy flotilla currently afloat is not prepared to handle it and is barely staying above water as it is.
What can be done?
One of a slew of intertwined opportunities includes the use of modern-day AI.
The advent of advanced generative AI and LLMs has already markedly impacted mental health advisement across the board. People are consulting daily with generative AI on mental health questions. Recent studies, such as one published in Harvard Business Review, indicate that the #1 use of generative AI is now for therapy-related advice (I’ll be covering that in an upcoming post, so please stay tuned).
We don’t yet have tight figures on how widespread the use of generative AI for mental health purposes is. At the population level, though, we know that ChatGPT alone has roughly 400 million weekly active users, and likely several hundred million more people use Anthropic Claude, Google Gemini, Meta Llama, and the like. Estimates of the proportion that might be using AI for mental health insights are worth considering, and I identify various means of gauging this at the link here.
Why AI Is So Popular For This
It makes abundant sense that people would turn to generative AI for mental health facets. Most of the generative AI apps are free to use, tend to be available 24/7, and can be utilized just about anywhere on Earth. You can create an account in minutes and immediately start conversing on a wide range of mental health aspects.
Contrast those ease-of-use characteristics with having to find and use a human therapist. First, you need to locate a therapist and determine whether they seem suited to your preferences. Next, you need to set up an agreement for services, schedule time to converse with the therapist, deal with constraints on when the therapist is available, handle the costs of using the therapist, and so on. There is a sizable amount of friction associated with using human therapists.
Contemporary AI is nearly friction-free in comparison.
There’s more to the matter.
People tend to like the sense of anonymity associated with using AI for this purpose. If you sought a human therapist, your identity would be known, and a fellow human would have your deepest secrets. Users of AI assume that they are essentially anonymous to AI and that AI won’t reveal to anyone else their private mental health considerations.
Another angle is that conversing with AI is generally a lot easier than doing so with a human therapist. The AI has been tuned by the AI makers to be overly accommodating. This is partially done to keep users loyal; if the AI were overbearing, users would probably find some other vendor’s AI to utilize.
Judgment is a hidden consideration that makes a big difference, too. It goes like this. You see a human therapist. During a session, you get a visceral sense that the therapist is judging you, perhaps by the raising of their eyebrows or a harshening of their tone of voice. The therapist might explicitly express judgments about you to your face, which certainly makes sense in providing mental health guidance, though preferably done with a suitable bedside manner.
None of that is normally likely to arise when using AI.
The default mode of most generative AI apps is that they avidly avoid judging you. Again, this tuning is undertaken at the direction of the AI makers (in case you are interested, here’s what an unfiltered, unfettered generative AI might say to users, see my analysis at the link here).
Someone using AI can feel utterly unjudged. Of course, you can argue whether that is a proper way to perform mental health advisement, but nonetheless, the point is that people are more likely to cherish the non-judgmental zone of AI.
As a notable aside, I’ve demonstrated that you can readily prompt AI to be more “judgmental” and more probing about your mental health, which overrides the usual default and provides a less guarded assessment (see the link here). In that sense, the AI isn’t mired or stuck in an all-pleasing mode that would seem inconsistent with proper mental health assessment and guidance.
Users can readily direct the AI as they prefer, or use customized GPTs that provide the same change in functionality (see the link here).
The Balance Associated With Using AI
Use of AI in this context is not a savior per se, but it does provide a huge upside in many crucial ways. A recurring question or qualm that I am asked about is whether the downsides or gotchas of AI are going to impede or possibly harm users when it comes to conveying suitable mental health advisement.
For example, the reality is that the AI makers, via their licensing agreements, usually reserve the right to manually inspect a user’s entered data, along with reusing the data to further train their AI, see my discussion at the link here. The gist is that people aren’t necessarily going to have their entered data treated with any kind of healthcare-related privacy or confidentiality.
Another issue is the nature of so-called AI hallucinations. At times, generative AI produces confabulations, seemingly made up out of thin air, that appear to be truthful but are not grounded in factuality. Imagine that someone is using generative AI for mental health advice, and suddenly, the AI tells the person to do something untoward. Not good. The person might have become dependent on the AI, having built a sense of trust, and not realize when an AI hallucination has occurred.
For more on AI hallucinations, see my explanation at the link here.
What are we to make of these downsides?
First, we ought to be careful not to toss out the baby with the bathwater (an old expression).
Categorically rejecting AI for this type of usage would seem myopic and probably not even practical (for my assessment of the calls for banning certain uses of generative AI, see the link here). As far as we know, the ready access to generative AI for mental health purposes seems to outweigh the downsides (please note that more research and polling are welcomed and indeed required on these matters).
Furthermore, there are advances in AI that are mitigating or eliminating many of the gotchas. AI makers are astute enough to realize that they need to keep their wares progressing if they wish to meet user needs and remain a viable money-making product or service.
An additional twist is that AI can be used by mental health therapists as an integral tool in their mental health care toolkit. We don’t need to fall into the mental trap that a patient must use either AI or a human therapist; they can use both in a smartly devised, joint fashion. The conventional non-AI approach is the classic client-therapist relationship. I have coined the notion that we are entering into a new triad, namely the client-AI-therapist relationship. The therapist uses AI seamlessly in the mental health care process and embraces rather than rejects the capabilities of AI.
For more on the client-AI-therapist triad, see my discussion at the link here and the link here.
I lean into the celebrated words of American psychologist Carl Rogers: “In my early professional years, I was asking the question, how can I treat, or cure, or change this person? Now I would phrase the question in this way: how can I provide a relationship that this person may use for their personal growth?”
That relationship is going to include AI, one way or another.
The Bottom Line Is Encouraging
One quite probable view of the future is that we will eventually have fully autonomous AI that can provide mental health therapy completely on par with human therapists, potentially even exceeding what a human therapist can achieve. The AI will be autonomously proficient without the need for a human therapist at the ready.
This might be likened to the Waymo or Zoox of mental health therapy, referring to the advent of today’s self-driving cars. As a subtle clarification, existing self-driving cars are only at Level 4 of the standard autonomy scale, not yet reaching the topmost Level 5. Similarly, I have predicted that AI for mental health will likely first attain Level 4, akin to the autonomy level of today’s self-driving cars, and then progress further to Level 5.
For my detailed explanation and framework for the levels of autonomy associated with AI for mental health, see the link here.
I wholly concur with Dr. Insel’s suggested point that we need to consider the use of AI on an ROI basis, such that we compare apples to apples. Given his outlined set of pressing issues with the existing quagmire of mental health care, we must take a thoughtful stance and gauge AI in comparison to what we have now.
You see, we need to realize that AI, if suitably devised and adopted, can demonstrably aid in overcoming the prevailing mental health care system problems. Plus, AI will likely open the door to new possibilities. Perhaps we will discover that AI not only aids evidence-based mental health care but takes us several steps further.
AI, when used cleverly, might help us to decipher how human minds work. We could shift from our existing black box approach to understanding mental health and reveal the inner workings that cause mental health issues. As eloquently stated by Dr. Insel, AI could be for mental health what DNA has been for cancer.
We are clearly amid a widespread disruption and transformation of mental health care, and AI is an amazing and exciting catalyst driving us toward a mental health care future that we get to define. Let’s all use our initiative and our energies to define and guide the coming AI adoption to fruition as a benefit to us all.