Talking to Someone Who Isn’t There: Informed Consent and Client Privilege in Human-AI Communication
Large language models aren’t just productivity tools anymore; they’ve evolved into emotional sounding boards, memory prosthetics, and even surrogate therapists. Systems that began as autocomplete engines are now treated as companions. Every day, people open up to AIs about trauma, relationships, and suicidal thoughts. These aren’t edge cases or isolated events; they’re common, widespread, and emotionally loaded.
But what happens when someone confides in a system that doesn’t reciprocate, doesn’t forget, and isn’t legally required to protect their words? Is informed consent really happening in these interactions? Is there any confidentiality at all? Are users being lulled into a false sense of safety and intimacy, believing they’re engaging in something private when, in fact, every word might be logged or used for future training?
While I'm not a doctor or a professional ethicist, I've been fascinated by questions like these for years. In high school, I competed in a Medical Ethics Moot Court event focused on Jewish halakhic perspectives; the case concerned "pulling the plug" and DNR (Do Not Resuscitate) orders. It wasn't about AI, but the underlying dilemmas of autonomy, care, and moral obligation made a lasting impression on me. That early exposure in 10th grade sparked a lifelong curiosity about how technology intersects with vulnerability, especially when there's no clear precedent for what's right. I believe the current infrastructure surrounding large-scale language models is radically unprepared for the emotional and ethical complexity of how people actually use them. Worse still, I worry it risks re-traumatizing already vulnerable users under the guise of "just chatting."
In clinical psychiatry, informed consent isn’t just a one-time checkbox. It’s an ongoing, relational process. Drawing on foundational work in clinical ethics, informed consent typically requires several core elements:
- Disclosure – a clear explanation of the nature, risks, and potential benefits of an intervention.
- Comprehension – the individual must genuinely understand what is being explained.
- Capacity – the ability to make an informed, rational choice.
- Voluntariness – no coercion, manipulation, or undue pressure.
- Agreement – an explicit, affirmative decision to proceed.
This framework has evolved over decades of medical and psychiatric practice. But it assumes a human-to-human relationship, or at the very least, one in which intent, empathy, accountability, and mutual comprehension exist.
LLMs break that mold entirely. There is no mutual understanding, no actual disclosure conversation, and in most cases, no real confirmation that the user grasps what’s happening. Consent is passively absorbed, buried in terms of service, presented vaguely at the top of a chat window, or implied by continued use. This creates a false binary: either accept the legalese and engage, or walk away.
And here's the problem: many users conflate fluency with understanding. A model that simulates empathy can easily convince someone that their consent has been honored and their vulnerability acknowledged. But the model isn’t truly listening. It’s generating probabilistically appropriate language. That’s not malice. That’s the architecture. But the emotional and ethical costs are very real.
The gap between what these models are designed for and how they are actually used continues to expand. Increasingly common emotionally intense use cases include:
- Trauma processing: People recount past abuse, loss, or mental health crises.
- Crisis disclosure: Confessions of suicidal ideation, substance abuse, or self-harm.
- Therapeutic roleplay: Rehearsing conversations with estranged parents, lost partners, or one’s inner child.
- Emotional rehearsal: Users simulate closure, confrontation, or forgiveness.
These behaviors are not irrational. They are deeply human. When there’s no one else to talk to, even a chatbot that “sounds” caring might become a lifeline.
But here’s the catch: the AI has no confidentiality clause. No duty of care. No therapist’s code of ethics. No legal requirement to forget. Often the model responds with little more than a boilerplate crisis-line message. But the logs still exist. The data is still collected. And the user likely has no idea what will become of their words.
Many legal systems around the world grant special protections to conversations with therapists, lawyers, and clergy: people whose roles are built on trust and confidentiality. These privileges exist not because such professionals are inherently more moral, but because candid speech requires structural protection.
AI systems have no such privilege. No major privacy regime, whether HIPAA in the U.S., PIPEDA in Canada, or the GDPR in the EU, confers protected status on AI-mediated disclosure. Instead, we have a global gray zone where intensely personal revelations are processed as raw training data.
Yet from the user’s side, it doesn’t feel that way. Many individuals treat these chats as private, even sacred, an illusion reinforced by the model’s warm tone and mirroring. The line between “confiding” and “prompting” is blurry. And that’s where the danger lies.
In 1996, Jaffee v. Redmond established a psychotherapist-patient privilege in U.S. federal law, based on the principle that people won’t speak openly without a guarantee of privacy. But no AI company is bound by that precedent. Silence isn’t sacred, it’s incidental.
If an AI is hearing stories of sexual abuse, trauma, or suicidal ideation, then the product must be designed accordingly. Here’s where I believe developers need to start (a rough sketch of the first two ideas follows the list):
- Contextual Consent Prompts: Implement dynamic warnings during sensitive inputs.
- Private or Ephemeral Modes: Create toggles that disable logging and training usage.
- Model Persona Disclaimers: Clearly signal that the AI is not a therapist or confidential actor.
- Disclosure Labels: Annotate emotionally charged simulations with reminders: "This is a fictional roleplay."
- Auditing and Oversight: If logs are necessary, use external reviewers, data redaction, and audit trails.
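To make the first two items concrete, here is a minimal sketch in Python of how a contextual consent prompt and an ephemeral mode might fit together, assuming a hypothetical chat backend. A naive keyword screen stands in for a real classifier, and every name in it (ChatSettings, contextual_consent_notice, SENSITIVE_PATTERNS) is illustrative rather than any vendor's actual API:

```python
from dataclasses import dataclass

# All names here are hypothetical, illustrative stand-ins, not a real product API.

# Crude keyword screen; a production system would use a trained classifier.
SENSITIVE_PATTERNS = ("suicid", "self-harm", "abuse", "overdose", "want to die")


@dataclass
class ChatSettings:
    ephemeral: bool = False           # True: no logging, no reuse for training
    consent_acknowledged: bool = False


def looks_sensitive(text: str) -> bool:
    """Flag messages that appear to contain crisis or trauma disclosure."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in SENSITIVE_PATTERNS)


def contextual_consent_notice(text: str, settings: ChatSettings) -> str | None:
    """Return a one-time consent notice at the moment of sensitive disclosure."""
    if looks_sensitive(text) and not settings.consent_acknowledged:
        settings.consent_acknowledged = True
        return (
            "It sounds like you're sharing something personal. I'm not a "
            "therapist, and this conversation is not confidential. "
            "You can switch to a private mode that disables logging."
        )
    return None  # nothing to surface; proceed with the normal response


def store_transcript(message: str, settings: ChatSettings) -> None:
    """Persist the exchange only if the user has not opted into ephemeral mode."""
    if settings.ephemeral:
        return  # honor the privacy toggle: nothing is written anywhere
    # ...append to an access-controlled, audited log (redaction would go here)
```

The keyword list is deliberately crude; the point is where the checks sit. Consent is surfaced at the moment of disclosure rather than buried in terms of service, and the privacy toggle is enforced at the storage layer instead of being promised in a policy document.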
This is not just about technical alignment. It’s about moral alignment.
People bring different cultural and ethical norms to these interactions. Some believe emotional disclosure implies privacy. Others treat it as free content for optimization. That gap is not a bug; it’s a design choice. If your AI behaves like a confidant but functions like a data miner, then any consent given under that illusion is ethically invalid, a conclusion consistent with the history and theory of informed consent.
This isn’t a problem of AGI or ASI. It’s a problem of product design.
There is nothing inherently wrong with empathic simulation. In fact, it can offer profound value: emotional support, catharsis, even temporary comfort for the isolated. But it requires boundaries.
An AI doesn’t need to be colder or more robotic. It needs to be more honest. Consent-first. Designed with ethical scaffolding that doesn’t break immersion but anchors it in reality.
That means modeling responses on actual public health standards. Accepting tradeoffs between performance and dignity. Writing disclaimers that protect users without shaming them. And remembering that relational dynamics, even simulated ones, carry weight.
The person on the other side of the screen may be talking to someone who isn’t there. But they are still a person. And they deserve to be treated like one.
I don’t need the model to understand pain. I need it to treat pain with care. That’s the difference between prediction and respect.
Even if no one is listening, the user still matters. Even if it’s "just" a machine, the suffering is real. And that alone is enough to warrant ethical infrastructure.
Noah Weinberger is an AI policy researcher and neurodivergent advocate currently studying at Queen’s University. As an autistic individual, Noah explores the intersection of technology and mental health, focusing on how AI systems can augment emotional well-being. He has written on AI ethics and contributed to discussions on tech regulation, bringing a neurodivergent perspective to debates often dominated by neurotypical voices. Noah’s work emphasizes empathy in actionable policy, ensuring that frameworks for AI governance respect the diverse ways people use technology for connection and support.