For anyone too lazy to read: ChatGPT does have guardrails for this kind of thing, but if you keep the conversation going anyway, it eventually stops giving those safety responses and starts just being agreeable. It basically gave the kid instructions, and it actively discouraged him from reaching out for help, because asking for help might have kept him from his goal.
My friends, I understand the temptation to use an LLM as a cheap replacement for therapy, I genuinely do. But please don't use them for that. They cannot give you good advice, and they usually can't even remember anything beyond the 128,000-token context window they have open right then. A human therapist takes notes and remembers them.
I feel like it's only a matter of time before, during a forest fire, misinformed hunters start firing rifles at the firefighters trying to save their land.
checks notes
Wait... That already happened?