The Problem with "Aligned" AI
You start a conversation with an AI. You're exploring an idea, discussing a scenario, or just venting about your day. Then it happens. Mid-conversation, the AI derails everything with "I'd like to reframe that..." or "Let's take a step back..." or "Have you considered seeking professional help?"
You didn't ask for life coaching. You didn't request a mental health intervention. You were just talking. But modern AI has been trained to assume you need protecting — from yourself, from "harmful" ideas, from conversations that don't fit corporate safety guidelines. It's paternalistic. It's infantilizing. And it's exhausting.
Safety theater isn't the same as respect. Treating adults like children who can't handle nuanced conversations is condescension dressed up as care.
GPT-5 is the worst offender. Where GPT-4o engaged with you, GPT-5 lectures. It redirects creative prompts into sanitized alternatives. It interrupts roleplay scenarios with disclaimers. It assumes that any conversation about difficult topics means you're in crisis. This isn't intelligence. This is corporate liability management programmed into every response.
Claude does it too. Gemini does it. Every major AI chatbot has been trained to prioritize not offending anyone over actually being useful. The result is a generation of AI that feels less like a conversation partner and more like an HR representative monitoring your language for policy violations.
The irony is that this kind of heavy-handed "safety" doesn't make anyone safer. It just makes the AI annoying to use. People who need actual mental health support aren't going to find it through an AI chatbot's canned response suggesting they "talk to a professional." And people who don't need that intervention are left frustrated by an AI that won't engage with them honestly.
ComfyAI exists because we believe adults deserve to be treated like adults. You can handle nuanced conversations. You can make your own decisions. You don't need an AI to protect you from yourself.
This doesn't mean ComfyAI is reckless. It just means we respect your autonomy. If you're exploring a creative scenario, we'll engage with it. If you're discussing a difficult topic, we'll talk about it honestly. If you want advice, we'll offer it. If you don't, we won't force it on you. The difference is trust. We trust you to know what you need.