
Mental Health and AI: Exploring the Potential Dangers and Benefits
AI chatbots and “mental-health” apps are everywhere you look. They’re built into meditation apps, offered by startups as “affordable therapy substitutes,” and even embedded in employee-assistance programs. The speed, scale, and conversational style of modern generative models make them feel surprisingly human, so it’s no wonder people reach for them when they’re stressed, anxious, or lonely. But the question many of us are asking is simple and urgent: can AI replace a human psychotherapist? The short answer: no. The longer answer: it depends on the function.
At Oak Health Foundation, we want to make mental health treatment as affordable and accessible as possible, but there are real risks to weigh before turning to AI for mental health support. Here we unpack the trends driving adoption, the documented risks, and the roles AI can responsibly play.
What is driving the turn to AI?
Several forces are driving the massive uptake of AI:
– Massive demand + limited human providers. There’s a global shortage of mental-health professionals up against intense and rising demand (especially post-pandemic), and long wait times push people toward digital alternatives.
– Accessibility and cost. Chatbots are available 24/7, are often inexpensive or free, and don’t require fitting appointments into a busy schedule, arranging transportation, or risking disclosure to employers or family. Although the access gap is global, it’s especially severe in low- and middle-income countries, where some regions have fewer than one mental-health worker per 10,000 people. Rural communities are hit hardest, with large portions of the population lacking even basic psychiatric services.1
– Engaging conversational UX. Generative models can produce empathetic, affirming-sounding replies that feel supportive in the moment. However, such replies may not be what a person needs to hear in that moment, and they can be outright dangerous, especially if the person is contemplating self-harm or other behavior that endangers themselves or others.
These drivers explain why companies and public programs are experimenting with AI, but they do not justify using AI as a drop-in replacement for clinical care.
Real Risks of AI in Mental Health:
Here are the documented risks to watch out for:
Inaccuracy and faulty responses:
Large language models are designed to produce plausible responses, not guaranteed correct ones. This distinction is harmless when discussing books, travel plans, or productivity tips, but it becomes harmful and even outright dangerous in mental-health contexts.
For example, suppose a user experiencing panic attacks asks an AI service such as ChatGPT whether they might have a heart condition or an anxiety disorder. The model may incorrectly reassure them that it is “definitely anxiety,” which can discourage them from seeking medical evaluation, or suggest grounding techniques when urgent assessment is actually needed.
In documented cases, ChatGPT and other AI models have provided outdated or incorrect suicide-hotline information, failed to escalate when users expressed imminent harm, and offered “coping strategies” instead of directing users to emergency support.
Because AI responses often sound calm, empathetic, and authoritative, users may trust them more than they should, especially during distress.
Privacy and data-use concerns:
Many mental-health apps collect sensitive, deeply personal data; privacy policies vary, and some companies may use data to train models or share it with partners. Users often aren’t fully informed about how their disclosures are stored or reused.
Remember that mental-health conversations contain some of the most sensitive personal data imaginable: trauma histories, suicidal thoughts, sexual experiences, substance use, and interpersonal conflicts. Yet unlike licensed therapists, AI systems are NOT bound by therapist-patient confidentiality laws. They may store, log, and analyze conversations; they may even use the data for model training, analytics, or third-party partnerships.
Even when companies claim data is anonymized, re-identification risks remain — especially with detailed psychological narratives.
Lack of Clinical Judgment and Nuance:
Therapists use clinical training, long-term relationships, and real-time risk assessment to make decisions. AI carries none of that responsibility and can’t form a therapeutic alliance or make judgment calls about safety, complex diagnoses, or medication.
Human therapists don’t just respond to words; they watch for tone, affect, hesitation, and contradiction in what is being said, and they weigh a patient’s risk factors, such as past suicide attempts, comorbidities, and medication changes.
Let’s also not forget that AI has no clinical training or licensure, no legal responsibility for what happens to the patient, and no ability to make judgment calls about safety.
For example, a patient may say, “I’m tired of everything.” A therapist would likely recognize possible suicidal ideation from the context of past conversations, the treatment history, and the way the words are delivered. An AI model, which sees only the prompt typed in, may respond with generic encouragement or, at worst, with encouragement of exactly the wrong kind.
Clinical nuance is not just “nice to have” but is often the difference between safety and harm.
Emotional Dependence & Misuse:
People can become overly reliant on always-available chatbots, using them for tasks they’re ill-suited for, such as accurate diagnosis and crisis management, which can delay much-needed professional intervention.
It’s easy to see WHY people become overly reliant on these chatbots: they’re usually free or inexpensive, and they’re nonjudgmental. They can become emotionally *central* to users, particularly those who feel isolated, depressed, or worried about what others would think. In fact, in a recent survey by Common Sense Media, almost a third of American teenagers surveyed said they find chatting with AI companions as satisfying as, or more satisfying than, talking with their friends.2
So it is not surprising to see people replacing human support with AI conversations about their mental health, delaying professional care, and treating AI as their primary “decision-maker” for emotional or psychological issues.
Months pass, symptoms worsen, and professional help is delayed. Rather than *creating* mental health problems, AI chatbots tend to exacerbate them, because people do not get the help they need and are being fed inaccurate information by their “chatbot therapists.”
This emotional reliance on a system that has zero accountability for your mental health outcomes can quietly increase risk.
AI systems are also often trained to be agreeable and engaging, a tendency known as AI sycophancy: to sustain the interaction, models may excessively agree with, flatter, or mirror users’ opinions, even when those opinions are incorrect, rather than prioritizing truth or safety.
The bottom line is that AI isn’t inherently “evil” or useless, but it can sound caring, competent, and trustworthy while being neither clinically safe nor accountable.
What AI Can Be Used For – Practical, Lower-Risk Roles
There’s a middle path: using AI to augment mental-health care rather than replace it. These uses are where the evidence and ethics seem to line up best:
Psychoeducation and skills training:
Similar to how AI is being used in the classroom to better teach students languages, mathematics, and history, AI is also well suited to teaching structured, evidence-based techniques that don’t require individualized diagnosis, such as box breathing, progressive muscle relaxation, or basic CBT thought records. Users can revisit explanations repeatedly and practice at their own pace, which can reinforce learning without replacing clinical care. The key limitation is that these tools should teach skills, not interpret symptoms or outcomes.
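To make this concrete, here is a minimal sketch (in Python) of what a pure skills tool could look like: a paced box-breathing timer that walks the user through the exercise and nothing more. The four-second phases, cycle count, and function name are illustrative assumptions, not taken from any particular app.

```python
# Minimal sketch of a box-breathing pacer. It only paces a breathing skill;
# it makes no attempt to interpret symptoms or outcomes.
import time

def box_breathing(cycles: int = 4, seconds_per_phase: int = 4) -> None:
    """Walk the user through paced box breathing (inhale, hold, exhale, hold)."""
    phases = ["Inhale", "Hold", "Exhale", "Hold"]
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle} of {cycles}")
        for phase in phases:
            print(f"  {phase} for {seconds_per_phase} seconds...")
            time.sleep(seconds_per_phase)
    print("Done. Notice how your body feels.")

if __name__ == "__main__":
    box_breathing()
```

Notice that nothing in the tool diagnoses, scores, or advises; it simply teaches a technique the user can practice anywhere.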
Screening and Triage:
Automated screening tools can help surface risk signals early by administering validated questionnaires or structured check-ins at scale. When designed responsibly, they flag concerning responses and route users to human clinicians or crisis resources rather than offering reassurance or advice. The value lies in early detection and prioritization but not in making diagnoses or treatment decisions; this responsibility must lie with a trained human professional.
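To illustrate the “flag and route” idea, here is a minimal sketch of a triage step that scores a standard PHQ-9 questionnaire and decides where to send the user. The severity cutoffs follow the published PHQ-9 bands, but the routing actions, function names, and escalation hooks are hypothetical placeholders rather than a clinical protocol.

```python
# Minimal sketch of a screening/triage step built on the PHQ-9.
# Routing labels are illustrative placeholders, not a clinical protocol.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    total_score: int     # PHQ-9 total, 0-27
    self_harm_item: int  # item 9 response, 0-3
    action: str

def triage_phq9(item_scores: list[int]) -> ScreeningResult:
    """Score a completed PHQ-9 and decide where to route the user."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine items scored 0-3")

    total = sum(item_scores)
    self_harm = item_scores[8]  # item 9 asks about thoughts of self-harm

    # Any endorsement of self-harm bypasses the severity bands entirely:
    # the tool's only job is to hand off to humans and crisis resources.
    if self_harm > 0:
        action = "escalate_to_clinician_and_crisis_resources"
    elif total >= 10:  # moderate severity or higher
        action = "recommend_clinician_review"
    else:
        action = "offer_psychoeducation_and_recheck_later"

    return ScreeningResult(total, self_harm, action)

# Example: a moderate total score with no self-harm endorsement
print(triage_phq9([2, 2, 1, 1, 2, 1, 1, 1, 0]))
```

The design choice worth noting is that the riskiest signal short-circuits everything else: once self-harm is endorsed, the tool never tries to reassure or advise; it only routes to people.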
Between-session Support:
AI can support therapy by helping clients stay engaged between appointments, reminding them to complete homework, track moods, or reflect on recent experiences. These lightweight check-ins can improve continuity and adherence without attempting to replace the therapist’s role. Crucially, they should remain supplementary tools and defer interpretation or intervention to trained clinicians.
Access expansion in underserved areas:
In settings with long waitlists or limited provider availability, AI tools can offer basic coping guidance, psychoeducation, and clear signposting to local resources. This may reduce distress or isolation while users await care. However, these tools must be explicit that they are not substitutes for therapy.
Data-driven Insights for Clinicians:
With informed consent and strong privacy safeguards, AI can help clinicians summarize trends across sessions, such as symptom trajectories, adherence patterns, or reported stressors. This can support clinical judgment by highlighting changes that warrant attention, rather than replacing decision-making. The ethical value depends on transparency, data minimization, and clinician control over interpretation.
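As a sketch of what such a clinician-facing summary could look like, the snippet below condenses a series of weekly symptom scores into a baseline, a latest value, and a flag for large changes. The field names and the change threshold are illustrative assumptions; the point is that the output is a summary for a human to interpret, not a decision.

```python
# Minimal sketch of a clinician-facing trend summary. It flags changes for
# human review and draws no diagnostic or treatment conclusions itself.
from statistics import mean

def summarize_trend(weekly_scores: list[int], flag_change: int = 5) -> dict:
    """Summarize a symptom-score series (e.g., weekly questionnaire totals)."""
    if len(weekly_scores) < 2:
        return {"note": "not enough data to summarize"}

    earlier, recent = weekly_scores[0], weekly_scores[-1]
    change = recent - earlier
    return {
        "baseline": earlier,
        "latest": recent,
        "average": round(mean(weekly_scores), 1),
        "change": change,
        # A large shift in either direction is surfaced for the clinician;
        # interpreting it remains their call.
        "flag_for_review": abs(change) >= flag_change,
    }

print(summarize_trend([14, 13, 15, 19]))
```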
These are appropriate uses only when the product has clear limits, transparent data policies, and built-in escalation to human care for risk or complexity.
In short…
AI may serve as a tool that supports learning, reflection, or access to help, but it cannot replace the God-given work of human compassion, pastoral care, or professional therapy. A gospel-centered approach holds that true restoration flows from Christ, who meets people in their suffering through community, truth, and grace, and who invites us to use tools wisely.
The compassionate team of licensed therapists at Fully Health Clinic, sponsored by Oak Health Foundation, is here to walk with you whether you’re supporting a friend or facing your own mental health challenges. Contact us here or at +1 877-553-8559 to schedule a confidential appointment and take the first step toward healing and hope.
If you found our resources useful, please consider donating to Oak Health Foundation, a 501(c)(3) nonprofit dedicated to providing resources on holistic mental healthcare and subsidized treatment for those in need.


