from Hacker News

Show HN: ThoughtCoach: Helping to improve mental health with AI

by mtharrison on 4/9/23, 11:36 PM with 17 comments

  • by cguess on 4/10/23, 4:15 AM

    Do you have literally any experience with mental health? According to its registration, the site is based in the UK, which means you'd have to qualify under the terms for a EuroPsy license (which are mirrored by the UK agencies as well): https://www.europsy.eu/national-requirements-uk. I'm going to guess you don't. You're exposing yourself to serious legal liability by deploying something like this. Not to mention your disclaimer, which says, more or less, "we don't save anything, but we send it to a company without any sort of business agreement; go check out what they do for yourself."

    If I were you, I'd take this down immediately before you get in trouble or, worse, someone gets seriously hurt.

  • by jawerty on 4/10/23, 6:29 PM

    Just to add to this: I see that you took it down, for some valid reasons, but don't be dismayed. This is a cool idea; probably not best implemented as a product to improve mental health, but there's something here. If you find that it has been helping you personally, then keep working on it. What you end up building may not be the product you originally envisioned but a solution to a broader 'technical issue' around language models, perhaps in predicting emotion or understanding the sentiment behind the words.

    Overall, I'm just saying that these personal projects often don't translate well to the public at first. But eventually you fine-tune it for yourself enough to find the true value in it. Then you figure out how to translate that to the public.

  • by ShamelessC on 4/10/23, 4:26 AM

    As someone with mental health issues who also sees GPT-4 as a profound step towards (and perhaps even the first instance of) general AI:

    No. No. No no no no no no no no. Soooo much no. You haven't thought this through enough. While I want nothing more than to use my skills in technology to help all those in pain, this is non-trivial, and you have not done the necessary work here.

    I hope that isn't too harsh. I just think it can't be overstated how important it is to get this sort of idea right on the first try. There should be _no_ tolerance for hacker mentality/move-fast-and-break-things, because the things you are breaking are people.

  • by nextworddev on 4/10/23, 2:01 AM

    What’s the plan if someone were driven to an adverse psychiatric episode by this app? How do you plan on handling liability?

  • by Ephil012 on 4/10/23, 1:42 AM

    How do you prevent cases where the AI might say something harmful (e.g. suggesting the user kill themselves)?

  • by mapster on 4/11/23, 11:40 AM

    Can you subtly integrate this into a game?