While I don’t agree that AI alone caused anything (because there had to be some instability there for the words of the AI to manipulate), I can absolutely agree that use of the AI is a very apparent contributing factor.
With suicide you need some very specific circumstances.
- Opportunity. A time and place where the person can’t or won’t be stopped from the attempt.
- A feeling of pain or helplessness that eclipses the person’s ability to deal with it or find an outlet for it.
- Means/mode. A bus, a rope and anchor point, a weapon.
- Intent.
I think the last one is where things get a bit murky from a legal standpoint.
Barring accidental suicide, what can legally be considered responsible for causing a suicide is limited. If you encourage a suicidal person to kill themselves, you as the other person had intent to harm, even if you didn’t mean for them to actually follow through, or believe that they would.
My fear is that these legal battles won’t result in the AI being held accountable, because it isn’t capable of intent.
My bigger fear is that the companies who are responsible are not going to be held responsible, for the same reason a gun manufacturer isn’t when someone sticks the barrel in their mouth and pulls the trigger. The argument that it’s a “tool” that’s been “misused” is gonna be thrown around a lot.
I wish I could believe we’d get more stringent regulations out of such lawsuits. But I just don’t have that kind of hope.