
ChatGPT for therapy? Houston, we have a problem

The US Space Force has clamped down on its use of ChatGPT, citing security concerns and the need to ensure the adoption of AI is responsible. The walkback makes sense – the Space Force sits under the Department of Defense, and the risks of introducing AI in the defense arena should be obvious. The same care, though, must be taken in another sensitive arena: mental health. For people undergoing mental health crises, unchecked technologies like ChatGPT will do more harm than good. The intrusion of big tech into mental health is no hypothetical – recently, a long-time employee and “Head of Safety” at OpenAI, the company behind ChatGPT, was roasted on Twitter for equating ChatGPT with therapy.

Already, chatbots manufacture plenty of incorrect information. ChatGPT has been found to fabricate facts across many domains, including giving inaccurate cancer treatment advice. And in June, an eating disorder helpline fired its staff in favor of a chatbot, only for the bot to give advice more likely to worsen eating disorders.

More subtly, though, the problem with chatbots is in their design: trained on existing texts and media, these systems will reproduce dominant narratives. And in an area like mental health, we are still very far from righting past wrongs. As recently as 1973, homosexuality was considered a mental illness, and still today, gender dysphoria – often a necessary diagnosis to receive gender-affirming care – is considered a pathology. We also know that marginalized communities are viewed under very different lenses when it comes to mental health. Black women, for instance, are over-diagnosed with borderline personality disorder (BPD), a diagnosis that invites significant stigma but is often confused with autism. Chatbots cannot avoid ingesting and recreating these harmful historical biases.

When it comes to nuanced human distress, we can’t trust that algorithms will make the right call. Take the case of Samaritans Radar, an ill-advised attempt in 2014 to use a bot to monitor unsuspecting individuals’ tweets and message their friends and family if that person was algorithmically assessed to be suicidal. Putting aside the tendency for people to troll on the internet, the phrases monitored – “tired of being alone”, “depressed”, and “need someone to talk to” – usually didn’t require such dramatic intervention.

Soon, an innocent conversation with ChatGPT could enter the morally murky waters of active rescue, a mental health helpline policy that dispatches emergency services if the caller is deemed to be at imminent risk of suicide. Although connecting people to support services sounds like a good thing, first responders in mental health crises are often the police. Combine that with the fact that people with mental health disorders are 16 times more likely to be killed by police, and this slippery slope leads to a chilling conclusion. Critically, fewer than 8% of people who express suicidal thoughts actually go on to attempt suicide. The line between active listening and active rescue could be the difference between help and harm. A human could figure that out – a bot that predicts a risk score and unilaterally deploys the police for responses that cross an arbitrary threshold cannot.

Without a doubt, many types of technology have been beneficial to our mental health, as the expansion of telehealth during COVID-19 lockdowns made clear. But technology should be a bridge, not a stand-in.

Mental health resources are perennially scarce, and it’s tempting to replace community care and actual therapy with cheap techno-solutionist alternatives. But we can and must divest from individualist, techno-centric solutions. There are countless benefits to real, shared experience that cannot be copied in code. Group therapy, for example, helps tackle stigma and builds solidarity among participants, particularly for groups formed around marginalized identities. Similarly, peer support increases people’s sense of self-efficacy, belonging, and hope. Forget bots – fund helplines like Trans Lifeline and THRIVE that are by and for marginalized communities and deliberately do not call emergency services without consent.

Ultimately, I don’t blame people for trying to find comfort in the depths of ChatGPT. I, too, am a child of the internet – but I wonder whether lonely interlocutors might find more relief from genuine human connection. In my experience, what’s helped me in crisis has been the random strangers on the internet willing to lend a patient ear at all hours of the night. I’ll never know their names, but that doesn’t matter. I’ll pick the human over the bot any day.
