Mental Health and AI

This issue keeps coming up at conferences and online. I want to set out my position here so I can simply forward the link, because I am genuinely bored by the repetitiveness of it all.

Our profession, at its core, is based on caring. A machine will never be able to genuinely care. If you think it can, you are being misled by marketing hype.

There are other problems with AI as well. The data centers that run LLM-based systems consume very large amounts of energy and water. This is damaging to the environment at a time when we know the climate crisis is set to become a leading cause of psychiatric morbidity in the coming years.

Only a handful of companies in rich countries can run these services, and these are companies that are not known for protecting privacy and that will use patients' data to train their models.

All LLMs have been trained on intellectual property without permission. Most models carry biases against minorities and are not as culturally sensitive as a properly trained therapist working within their own community.

All of the above makes it unethical to use AI in the delivery of mental health services. Long waiting times and high costs should be addressed by better funding for mental health service delivery and training, not by automation.

Update (2025-04-17):

First, I understand that AI stands for a heterogeneous group of technologies that use different techniques to solve different problems. In this post I am particularly focused on LLM implementations built on the Transformer architecture.

A colleague replied that AI is a big surge coming our way and that we need to understand it instead of denying its existence. My response is that if you really took the time to understand it, you would conclude that, for the sake of our patients, we need to actively resist and reject its implementation in mental health care delivery.