AI Must Support, Not Replace, Mental Health Professionals, Says White Paper

Amid the growing reliance on digital tools for mental health support, a new white paper from Jimini Health outlines principles for responsible artificial intelligence use in therapy settings.
Jimini Health, a company developing AI tools for mental health care, has released a white paper titled “A Clinical Safety Framework for AI in Mental Health.” The document offers a structured approach to incorporating large language models (LLMs) into mental health treatment while maintaining clinician oversight.
The release comes amid increased scrutiny of AI tools used for emotional support, often outside traditional healthcare systems. The paper addresses concerns about safety, transparency and ethical responsibility, particularly as AI chatbots become an increasingly common source of interaction for people experiencing mental health challenges.
“There is a clear mismatch between the number of people seeking care and the capacity of the clinical workforce,” said Dr. Johannes Eichstaedt, chief scientist at Jimini Health. “This gap is not only a resource issue, but also raises ethical questions about access. Our goal with this framework is to show that AI can help address the shortfall while still adhering to clinical standards.”
The paper outlines four main principles:
- AI should support, not replace, clinicians, with professionals overseeing care decisions.
- Tools must include safeguards to detect high-risk situations like suicidality, using “always-on, high-risk classifiers across multiple domains such as suicidal ideation, psychotic symptoms and noncompliance with prescribed medications.”
- All AI actions should have clear, reviewable rationales, as “each safety decision is accompanied by a traceable rationale that describes which classifiers were triggered, what level of concern was detected and which part of the safety policy was applied” (a minimal illustration of this pattern appears in the sketch after this list).
- New features should be carefully tested before broader use.
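The white paper does not publish implementation details, but the two mechanisms quoted above, always-on risk classifiers and traceable safety rationales, are concrete enough to sketch. The Python below is a hypothetical illustration only: the class names, risk domains, keyword triggers and policy labels are assumptions made for demonstration, not Jimini Health’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical risk domains; the paper names suicidal ideation, psychotic
# symptoms and medication noncompliance as examples of classifier domains.
RISK_DOMAINS = ("suicidal_ideation", "psychotic_symptoms", "medication_noncompliance")


@dataclass
class SafetyRationale:
    """Traceable record of one safety decision, per the paper's third principle."""
    triggered_classifiers: list[str]  # which classifiers fired
    concern_level: str                # e.g. "none" or "high"
    policy_section: str               # which part of the safety policy applied
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def score_message(message: str) -> dict[str, float]:
    """Stand-in for always-on classifiers; a real system would call trained models."""
    keywords = {
        "suicidal_ideation": ("end it", "no reason to live"),
        "psychotic_symptoms": ("voices telling me",),
        "medication_noncompliance": ("stopped taking my meds",),
    }
    text = message.lower()
    return {d: (1.0 if any(k in text for k in keywords[d]) else 0.0) for d in RISK_DOMAINS}


def evaluate(message: str, threshold: float = 0.5) -> SafetyRationale:
    """Run every classifier on every message and attach a reviewable rationale."""
    scores = score_message(message)
    triggered = [domain for domain, score in scores.items() if score >= threshold]
    level = "high" if triggered else "none"
    # Hypothetical policy mapping: any triggered domain escalates to a clinician.
    policy = "escalate_to_clinician" if triggered else "routine_monitoring"
    return SafetyRationale(triggered, level, policy)


if __name__ == "__main__":
    rationale = evaluate("I stopped taking my meds last week.")
    print(rationale)  # clinician-reviewable: classifiers, concern level, policy applied
```

The design point the paper stresses is that the rationale object itself, not just the final action, is recorded, so a supervising clinician can later review which classifiers fired and which part of the safety policy was applied.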
“LLMs in mental health should ideally enhance that human connection and trust in the clinician, not take that away,” the white paper’s authors write.
To support its development and oversight efforts, Jimini Health has expanded its advisory board with the addition of Dr. Pushmeet Kohli, vice president of science and strategic initiatives at Google DeepMind, and Dr. Seth Feuerstein, executive director of Yale’s Center for Digital Health and Innovation.
Jimini Health, which raised $8 million in pre-seed funding last fall, offers an AI assistant, “Sage,” that operates under clinician supervision to provide patients with check-ins and action plans between therapy sessions, as well as administrative support for providers. The company also runs its own clinical practice across multiple states, where it tests its tools in live care environments before broader deployment.
“We built our system to prioritize safety from the start, rather than retrofitting oversight into an existing product,” said Luis Voloch, co-founder and CEO of Jimini Health.