
Mental Health Apps: the AI Psychologists

Aditi Biswas,

Editorial Intern,

Indian Society of Artificial Intelligence and Law.


 

As the world has grown more complex, a rise in mental health problems has been inevitable. In response, an interesting phenomenon has emerged: artificial intelligence-based mental health applications (MHAs). Accessible to anyone with a smartphone through various app stores, they usually promise improved mental health through relaxation exercises and stress-management skills, amongst other features.

Behavioural scientists who study organizational behaviour have long considered the idea of automating psychology a non-starter, since psychology is evidently a humane profession specialising in empathy and intuitive skills that cannot be mimicked by a machine. However, AI has changed what we consider possible when it comes to machines adopting human-like behaviour.

A psychologist’s job includes assessing the problems a patient is facing with computer-aided psychological tests, using psychological assessment and evaluative tools to diagnose those problems as a condition, formulating treatments or interventions for that condition through therapy, and evaluating and summarising all of these steps.

Several MHAs are based on cognitive behavioural therapy (CBT), which identifies and seeks to change harmful or destructive thought patterns that negatively influence emotion and behaviour.

The most common manifestation of AI in these mental health apps is the chatbot. An AI chatbot is software that simulates a conversation with the user in natural language through messaging applications and, sometimes, through voice. AI chatbots in MHAs are programmed with therapeutic techniques to assist people with anxiety and depression, but the promise of this technology is tempered by concerns about the apps' efficacy, privacy, safety, and security.
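To make this concrete, the sketch below shows how a simple rule-based therapeutic chatbot might map user messages to CBT-style prompts. It is purely illustrative: the keywords, responses, and the `respond` function are assumptions for this example, and real MHAs rely on far richer language models and clinically reviewed content.

```python
# Minimal illustrative sketch of a rule-based "therapeutic" chatbot.
# The keywords and responses below are invented for illustration only;
# production MHAs use far more sophisticated NLP and vetted clinical content.

RULES = [
    (("anxious", "anxiety", "worried"),
     "It sounds like you're feeling anxious. Can you describe the thought "
     "that triggered this feeling?"),
    (("sad", "depressed", "down"),
     "I'm sorry you're feeling low. What evidence do you have for and "
     "against that thought?"),
]

DEFAULT = "Tell me more about what's on your mind."


def respond(message: str) -> str:
    """Return a CBT-style prompt based on simple keyword matching."""
    text = message.lower()
    for keywords, reply in RULES:
        if any(word in text for word in keywords):
            return reply
    return DEFAULT


if __name__ == "__main__":
    print(respond("I've been really anxious about work lately."))
```

Even this toy example highlights why efficacy concerns arise: a keyword match cannot grasp context, nuance, or risk the way a trained clinician can.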

Like any technological advancement, MHAs come with both advantages and disadvantages. The most prominent advantages are as follows.

  • They are often preferred over consulting human psychologists partly due to the stigma attached to mental health.

  • They offer their user-patients the convenience of ‘medical’ assistance with mental health, free of cost or at very low prices, on their own schedule, right on their mobile devices.

  • Patient engagement is higher due to real-time engagement, usage reminders, and gamified interactions.

  • The use of pictures rather than text, shorter sentences, and inclusive, non-clinical language creates a simpler user interface for patients and reduces their cognitive load.

  • Since mental illnesses often co-occur, MHAs whose diagnostic methods address symptoms shared by multiple disorders increase patient engagement and treatment efficacy by reducing the commitment needed to interact with multiple apps.

  • Features of MHAs that let users increase their emotional self-awareness (ESA) by self-monitoring and periodically reporting their thoughts, behaviours, and actions have been shown to reduce symptoms of mental illness and improve coping skills; a minimal sketch of such a self-monitoring log appears after this list.

  • The anonymity that MHAs offer their user-patients is unmatched by human psychologists: a psychologist is professionally obligated to keep a patient’s personal information confidential, whereas an MHA involves no direct human interlocutor at all, removing the dilemma of confidentiality altogether.

  • MHAs are more consistent in their treatment than human psychologists due to their programming.
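Building on the self-monitoring point above, here is a minimal sketch of the kind of mood log such an ESA feature might keep. The MoodEntry fields, the SelfMonitoringLog class, and the average-mood summary are assumptions for illustration, not any real app's design.

```python
# Minimal illustrative sketch of a self-monitoring log for emotional
# self-awareness (ESA). Fields and summary logic are assumptions for
# illustration only.

from dataclasses import dataclass, field
from datetime import datetime
from statistics import mean


@dataclass
class MoodEntry:
    mood: int        # self-reported mood, e.g. 1 (low) to 5 (high)
    thought: str     # the thought or trigger the user notes down
    timestamp: datetime = field(default_factory=datetime.now)


class SelfMonitoringLog:
    """Collects periodic self-reports and summarises them back to the user."""

    def __init__(self):
        self.entries = []  # list of MoodEntry

    def record(self, mood: int, thought: str) -> None:
        self.entries.append(MoodEntry(mood=mood, thought=thought))

    def average_mood(self) -> float:
        return mean(e.mood for e in self.entries) if self.entries else 0.0


log = SelfMonitoringLog()
log.record(2, "Worried about tomorrow's presentation.")
log.record(4, "Felt calmer after a walk.")
print(f"Average reported mood: {log.average_mood():.1f}")
```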

The most prominent disadvantages of MHAs are as follows.

  • They frequently lack an underlying evidence base.

  • A lack of scientific credibility is also commonly noted.

  • Consequently, their clinical effectiveness is limited.

  • They cultivate an over-reliance on apps among their user-patients.

  • The broad access they provide may result in at-risk people relying on the apps instead of seeking out professional help, which may prove dangerous for them.

  • Self-diagnosis can increase anxiety in user-patients.

  • Clinicians are often not involved when these applications are developed, which rules out professional intervention in case of potential red flags.

  • Most MHAs focus on a single condition, disorder, or illness, whereas professional help is a more comprehensive process with a more inclusive treatment plan.

While an AI chatbot may provide a person with a place to access tools and a forum to discuss issues, as well as a way to track moods and increase mental health literacy, AI is not a replacement for a therapist or other mental health clinician. Ultimately, if AI chatbots and other MHAs are to have a positive impact, they must be regulated, well-informed, peer-reviewed, and evidence-based; and society must avoid techno-fundamentalism in relation to AI for mental health.

Health professionals fall under two kinds of laws that may or may not apply to MHAs: medical negligence laws and consumer protection laws, both of which intersect in the healthcare sector.

An act or omission (failure to act) by a medical professional that deviates from the accepted medical standard of care is known as medical negligence. Negligence is used as a tool to ascertain fault in a civil case where injuries, losses, or damages occur to a party. Medical professionals owe a certain standard of care to their patients, generally accepted to be the level and type of care that a reasonably competent and skilled health care professional, with a similar background and in the same medical community, would have provided under the same circumstances. However, a claim or suit can be brought against a medical professional only if the negligence had a detrimental effect on the patient (damages), and if the harm caused to the patient was a foreseeable result of the medical negligence (legal causation). Medical negligence claims against psychologists involve the same requirements as any other medical negligence case; however, psychological harm is always more difficult to prove than other forms of harm.

An example of medical negligence by a psychologist would be diagnosing a patient who has bipolar disorder, which includes both manic and depressive episodes, with mere clinical depression, and hence possibly prescribing them incorrect medication.

When it comes to health applications, developers of such apps will not fall under the ambit of medical negligence or medical malpractice, because such laws apply only to a doctor-patient relationship. However, consumer protection laws will be applicable in such a scenario, specifically product liability law, which provides consumers with legal recourse for injuries suffered from a defective product. A product is required to meet the ordinary expectations of a consumer; responsibility therefore lies with manufacturers and sellers to ensure the safety and quality of the product as described. Since an MHA passes as a product, product liability laws are likely to apply to it. There is, however, a grey area in the law when it comes to applications available free of cost.

Also, some healthcare applications add an extra layer of protection for themselves by making their customer-patient-users sign digital consent forms. These forms generally ensure that the patient knows the risks and complications before a treatment begins and is aware of alternative treatment options. This provides an additional safeguard against grievance or negligence lawsuits. Furthermore, some applications make these forms ‘tamper-proof’, allowing no modifications after signing. Even though these forms must comply with current law, they can be borderline exploitative of the desperate condition of their user-patients.

Additionally, in the case of medical institutions, corporate negligence and vicarious liability may prove effective instead of medical negligence. In particular, the concept of ‘negligent credentialing’ might come into play if an MHA attached to a hospital uses an AI whose credentials have not been appropriately reviewed, much as a hospital must review the credentials of the doctors and other staff it hires. Due to the heavy commercialization of medical practice around the world, consumer protection laws do intersect with medical negligence law. Here too, however, the Consumer Protection Act of India covers all services provided by medical practitioners to patients, except those provided free of cost.

Stigma surrounding mental health, not only in India but in the rest of the world as well, keeps people from demanding their fair share as patients or consumers. AI-based MHAs available free of cost completely escape the scope of the laws that generally cover medical institutions, practitioners, or other applications. This is dangerous for patients struggling with mental illnesses or disorders, who may find themselves in the desperate situation of having to rely on free applications for medical assistance. Legal safeguards are needed here because of the risk these situations pose to at-risk individuals and, arguably, to society as a whole. Even in the case of low-cost MHAs, only select consumer protection laws apply. MHAs fall under the ambit of healthcare applications and should therefore be brought within the scope of medical negligence law.

Finally, for AI-based MHAs to be approved and made readily available and accessible to all through app stores or the internet in general, there need to be more stringent requirements. Involving mental health professionals in the development of these applications, having them peer-reviewed by a group of professionals, and basing them on evidence from user trials (conducted with informed consent) might go a long way towards the advancement and widespread effective use of MHAs.


