Some red flags may also be symptoms of mental illness or neurodivergence, but that's not what I was asking for. I wanted silly writing prompts, like "doesn't own a bedframe" or "calls his mom during the first date." Could the latter be done by someone with a mental illness? Sure. But it's not exclusive to people with a specific illness or neurodivergent trait. Red flags should not be equated with symptoms of neurodivergence.
I didn't say I wanted it scrubbed of offensive material. But when the user specifically asks for material not to involve something and it keeps delivering that thing anyway, that's a usability problem. It's like saying "tell me a story," getting one that describes a sexual assault, and then asking for a story without an assault only to get one with a far more graphic assault in it. Would that not be horrifically triggering?
How many people have to be harmed or killed by a new technology before it gets better regulated? Language generation models have been around a lot longer than you think -- but sure, ChatGPT specifically is new and shiny. So we're just supposed to excuse the problems it has? I think this kind of AI will be very useful in the future and have a ton of applications. The way it's being used right now is... not that.