Mental Health / AI - 12/10/24
A Call for Transparency in Mental Health AI Software: Data Sources and Limitations/Risks

Exploring the intersection between AI and Mental Health
Welcome to my weekly newsletter built to demystify the role of AI in mental health care, offering insights, research, and practical applications that can enhance both therapeutic practices and client outcomes. I also plan to shed light on the major concerns and challenges that AI brings to mental health.
What to expect each week:
Latest Innovations: Stay updated on the newest AI tools and technologies being integrated into mental health practices, from chatbots offering support to advanced data analytics that inform treatment plans.
Research Insights: Dive into the latest studies that explore the efficacy of AI-driven interventions and their impact on traditional therapy models.
Ethical Considerations: As AI becomes more prevalent, it raises essential ethical questions. My newsletter will tackle these challenges, discussing topics like data privacy, algorithmic bias, and the implications of relying on technology in therapeutic settings.
And Much More!
To follow along on this journey, I invite you to also connect with me on LinkedIn by clicking below:
Also, you can connect with me on X (Twitter) here: @MentalHealthAI
In the News…
Here's a step in the right direction toward establishing the trustworthiness of AI-generated content.

A Call for Transparency in Mental Health AI Software
This will be an ongoing series over the next few weeks, highlighting different areas requiring greater levels of transparency.
Data Sources and Training
AI needs DATA to train and learn. Where is that data coming from? With many apps, your guess is as good as mine, because they don't share this information. In theory, the data behind an AI mental health app should come from sources specific to mental health treatment, such as therapy transcripts and research articles. BUT the companies behind LLMs continue to strike agreements with major social media platforms to purchase user content for training their models.
Just think: your AI mental health chatbot could be providing feedback based on a TikTok "influencer"…
The first step in improving this area is to require all mental health AI software vendors to fully disclose the data used to train their LLMs. That probably won't happen without government regulation, so for now it's on you to seek out this information.
If a company isn't willing to provide this information, what does that tell you?
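To make the idea of disclosure concrete, here is a minimal sketch of what a machine-readable "training data card" could look like if a vendor chose to publish one. This is purely illustrative: the field names, product name, and structure are my assumptions, not an existing standard or any company's actual format.

```python
# Hypothetical sketch of a publishable training-data disclosure ("data card").
# All field names and the product name are illustrative assumptions.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DataSourceDisclosure:
    source_name: str           # e.g. "De-identified therapy transcripts"
    source_type: str           # "clinical", "research", "social_media", ...
    licensed: bool             # obtained under an explicit license?
    clinically_reviewed: bool  # vetted by licensed clinicians?
    notes: str = ""


@dataclass
class ModelDataCard:
    model_name: str
    last_updated: str
    sources: list[DataSourceDisclosure] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the disclosure so it can be posted publicly."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    card = ModelDataCard(
        model_name="ExampleMentalHealthBot v1",  # hypothetical product
        last_updated="2024-12-10",
        sources=[
            DataSourceDisclosure(
                source_name="Peer-reviewed CBT literature",
                source_type="research",
                licensed=True,
                clinically_reviewed=True,
            ),
            DataSourceDisclosure(
                source_name="Purchased social media corpus",
                source_type="social_media",
                licensed=True,
                clinically_reviewed=False,
                notes="Not vetted for clinical accuracy.",
            ),
        ],
    )
    print(card.to_json())
```

Even a simple disclosure like this would let users and clinicians see at a glance whether a chatbot's "knowledge" is grounded in clinical material or in scraped social media content.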
Limitations and Risks
AI software should clearly communicate its limitations, such as an inability to handle crisis situations, the potential for inaccurate responses, and cases where guidance is based on incomplete data.
This is especially important in high-stakes situations like suicidality.
AI software CANNOT market itself as therapy. Therapy can only be provided by a licensed professional. If you encounter any app, chatbot, or site that markets itself as "AI therapy," stay far, far away. This is unethical and, some could argue, an illegal promotion of services.
The first step is for AI software to CLEARLY state its risks and limitations in a prominent place on its website or app.
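As a thought experiment, here is a minimal sketch of how an app might surface that notice and hand off crisis-related messages to human resources instead of the model. The keyword check is deliberately naive and the function names are hypothetical; real products would need clinically validated risk detection and professional review, not a string match.

```python
# Hypothetical sketch: prominent limitations notice + crisis hand-off.
# The keyword list is intentionally minimal and NOT a clinical screening tool.

LIMITATIONS_NOTICE = (
    "This app is not therapy and is not a substitute for a licensed "
    "professional. It cannot handle crisis situations and may give "
    "inaccurate or incomplete guidance."
)

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}


def respond(user_message: str) -> str:
    """Return a crisis hand-off, or a model reply prefixed by the notice."""
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Route to humans; 988 is the US Suicide & Crisis Lifeline.
        return (
            "I'm not able to help with a crisis. If you are in the US, call or "
            "text 988, or contact your local emergency number right away."
        )
    model_reply = call_model(user_message)  # placeholder for the LLM backend
    return f"{LIMITATIONS_NOTICE}\n\n{model_reply}"


def call_model(user_message: str) -> str:
    # Stand-in for whatever model the product actually uses.
    return "General, non-clinical wellness information goes here."
```

The point isn't the specific code; it's that limitations and crisis routing should be built in and visible, not buried in a terms-of-service page.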