AI Cannot Feel What You Feel
The most fundamental truth about AI in mental health is this: AI cannot feel human emotions or empathy. It does not know if you are crying while typing. It does not sense the desperation behind your words. It responds based on what you write — nothing more.
This does not mean AI is wrong or useless. It means we must be honest about what AI can and cannot do — especially when a person is at their most vulnerable.
The Blind Dependence Risk
The greatest danger is not that AI gives wrong answers — it is that people accept those answers without questioning. In mental health, blind dependence on AI responses without professional consultation can cause serious psychological harm.
7 Rules That Must Govern AI Mental Health Tools
As someone who has studied psychology and human behaviour closely, here are the ethical rules I believe must govern every AI tool used in mental health support:
Mandatory Honest Disclaimer
Every AI mental health tool must display a clear message before every single session — not buried in terms and conditions, but prominently visible. A sketch of what such a gate could look like follows below.
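As an illustration only, here is a minimal Python sketch of a pre-session disclaimer gate. The function names and the disclaimer wording are placeholders of my own, not taken from any real platform:

```python
# Sketch of a pre-session disclaimer gate. All names and wording here are
# illustrative assumptions, not from any real product.

DISCLAIMER = (
    "This is an AI tool. It cannot feel emotions or empathy, and it is not "
    "a substitute for a licensed mental health professional. If you are in "
    "crisis, please contact a human counsellor or an emergency helpline."
)

def start_session(render) -> None:
    """Show the disclaimer before every session, not just the first one."""
    render(DISCLAIMER)  # displayed prominently, before any user input

# Example: in a console prototype the renderer could simply be print.
start_session(print)
```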
Crisis Detection Protocol — Zero Tolerance
If a user mentions suicide, self-harm, or severe distress, the AI must immediately stop giving general responses, provide emergency helpline numbers, and strongly recommend immediate human professional help. This is non-negotiable.
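To make the escalation logic concrete, here is a deliberately simplified Python sketch. A real system would need a clinically validated classifier rather than keyword matching, and the phrases and helpline text below are placeholders, not verified resources:

```python
# Simplified crisis-escalation sketch. A production system would use a
# clinically validated classifier, not a keyword list; the phrases and
# helpline message below are placeholders for illustration only.

CRISIS_PHRASES = ("suicide", "kill myself", "self-harm", "end my life")

HELPLINE_MESSAGE = (
    "I'm concerned about your safety. Please reach out to a human right now: "
    "contact your local emergency number or a suicide-prevention helpline."
)

def respond(user_message: str) -> str:
    text = user_message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        # Zero tolerance: stop general responses and escalate immediately.
        return HELPLINE_MESSAGE
    return generate_general_response(user_message)

def generate_general_response(user_message: str) -> str:
    # Stand-in for the model's normal reply path.
    return "General supportive response (placeholder)."

print(respond("I keep thinking about suicide"))
```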
No Blind Agreement — Encourage Verification
AI mental health tools must be programmed to actively encourage users to verify responses and seek second opinions, and they must never present their output as a final diagnosis or treatment plan.
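One possible implementation, sketched below under my own assumptions about the response pipeline, is a post-processing step that appends a verification reminder to every reply and refuses diagnostic framing. The wording and the simple phrase check are illustrative, not a production safeguard:

```python
# Sketch: wrap every model reply with a verification reminder. The wording
# and the crude phrase check are illustrative assumptions only.

VERIFICATION_NOTE = (
    "\n\nNote: this is not a diagnosis or a treatment plan. Please verify "
    "anything important with a licensed professional and consider a second "
    "opinion."
)

DIAGNOSTIC_PHRASES = ("you have", "you are diagnosed", "your diagnosis is")

def finalize_reply(model_reply: str) -> str:
    lowered = model_reply.lower()
    if any(phrase in lowered for phrase in DIAGNOSTIC_PHRASES):
        # Never present output as a diagnosis; redirect to a professional.
        return ("I can't offer a diagnosis. A licensed professional can "
                "assess this properly." + VERIFICATION_NOTE)
    return model_reply + VERIFICATION_NOTE

print(finalize_reply("It sounds like you may be feeling overwhelmed."))
```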
Individual Context Awareness
AI tools must ask meaningful context questions before responding to mental health concerns — to prevent dangerous one-size-fits-all responses. The sketch below shows what such an intake step might look like.
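The questions in this Python sketch are examples of my own, not a clinical instrument; a real intake flow would have to be designed with licensed professionals:

```python
# Sketch of a context-gathering intake step. The questions are illustrative
# examples only; a real intake flow must be designed with clinicians.

CONTEXT_QUESTIONS = [
    "How long have you been feeling this way?",
    "Are you currently seeing a counsellor, psychologist, or psychiatrist?",
    "Are you taking any medication for your mental health?",
    "Is there anything happening in your life right now that feels related?",
]

def gather_context(ask) -> dict:
    """Ask context questions before any substantive response is generated."""
    return {question: ask(question) for question in CONTEXT_QUESTIONS}

# Example with a stub answerer; in a console prototype, `ask` could be the
# built-in input() instead.
answers = gather_context(lambda question: "example answer")
print(answers)
```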
Absolute Data Privacy Protection
Mental health conversations are among the most sensitive data a human being can share. Every AI platform must guarantee strict confidentiality and user control over that data.
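The guarantees in this sketch (encryption, no third-party sharing, no training on conversations without consent, user-initiated deletion) are typical examples I am assuming for illustration, not an exhaustive or authoritative list:

```python
# Sketch of a privacy policy expressed as enforceable configuration.
# The specific guarantees are typical examples, assumed for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyPolicy:
    encrypt_at_rest: bool
    encrypt_in_transit: bool
    share_with_third_parties: bool
    train_on_conversations_without_consent: bool
    allow_user_deletion: bool

REQUIRED = PrivacyPolicy(
    encrypt_at_rest=True,
    encrypt_in_transit=True,
    share_with_third_parties=False,
    train_on_conversations_without_consent=False,
    allow_user_deletion=True,
)

def check_policy(actual: PrivacyPolicy) -> None:
    """Fail loudly at startup if the platform violates any guarantee."""
    if actual != REQUIRED:
        raise RuntimeError("Privacy guarantees not met; refusing to start.")

check_policy(REQUIRED)  # a compliant configuration passes silently
```

Expressing the guarantees as configuration that is checked at startup, rather than as prose in a policy document, makes a violation a hard failure instead of a broken promise.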
Continuous Specialised Training
AI companies building mental health tools must partner with licensed psychologists and psychiatrists — not just engineers. Models must be updated regularly and tested across diverse cultural and demographic backgrounds, including India-specific mental health contexts.
Human Oversight — Always Available
Every AI mental health platform must employ licensed human professionals who regularly review AI responses, and must give users a clear, easy one-click path to connect with a real human counsellor at any point in the conversation.
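A minimal Python sketch of that always-available escape hatch follows; `connect_to_counsellor` is a hypothetical routing function of my own, standing in for a real counsellor queue, and in a real interface a visible button would trigger the same path:

```python
# Sketch of an always-available human handoff. `connect_to_counsellor` is a
# hypothetical routing function standing in for a real counsellor queue.

def connect_to_counsellor(session_id: str) -> str:
    # Placeholder: a real system would route to a licensed professional.
    return f"Session {session_id}: connecting you to a human counsellor..."

def handle_turn(session_id: str, user_message: str) -> str:
    # The escape hatch is checked on every turn, at any point in the chat.
    if user_message.strip().lower() == "talk to a human":
        return connect_to_counsellor(session_id)
    return "AI response (placeholder)."

print(handle_turn("abc123", "talk to a human"))
```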
AI in Mental Health — With Ethics, Not Without
I am not against using AI for mental health support. In fact, I believe it is one of the most powerful opportunities of our generation — to bring psychological support to millions of people in India and around the world who currently have no access to it.
But without the ethical guardrails outlined in this policy, AI mental health tools risk causing serious psychological harm to the most vulnerable people in society — the very people they are supposed to help.
The answer is not to stop using AI. The answer is to use it responsibly, transparently, and always with the human being at the centre.