The Silent Observer: The AI Gaze in the Therapy Room
Another week of over twenty sessions. It's Sunday night. Again. And you haven’t written a single note. Last time this happened, you put it off another week and found yourself staring at over fifty unfinished notes, each one a reminder of how far behind you were. The weight of it made your chest tighten. You fantasized about walking away from the field entirely. This is not why you became a therapist.
We didn’t enter this work to become documentation machines. And yet, we’ve also been taught, explicitly and implicitly, that documentation is everything. If it’s not in the note, it didn’t happen. We carry the burden of contemporaneous documentation, of proving that each session meets medical necessity, that every intervention is justified, and that progress is being tracked and measured. We document to avoid audits, to ensure compliance, and to defend ourselves if questioned. Between back-to-back sessions, insurance demands, and admin fatigue, many of us have longed for a magic solution—something to take the onus off our already burdened shoulders.
Enter AI.
With companies like SimplePractice, Mentalyc, and Upheal now offering AI-generated progress notes and session analytics, clinicians are being pitched a streamlined, smarter therapy workflow. These tools promise to transcribe, summarize, and even interpret your sessions, helping you stay on top of documentation and sharpen your clinical technique.
Sounds amazing, right? Almost too good to be true. And for many of us, it is.
Because when AI enters the therapy room, it doesn’t just take notes. It changes the space. And before we let it in, we need to talk about what we’re risking.
The Lure of Efficiency
Let’s start with the appeal.
AI note takers promise exactly what we’ve been desperate for: a way out of the documentation grind. They can transcribe full sessions, generate draft progress notes, and even highlight key clinical moments. For therapists seeing twenty to thirty clients a week, that can feel like a lifeline.
“I absolutely adore AI notes assistant. It gives me time and energy to focus on my clients. Is it ethical? Probably not. Does it make my life easier? Yup.”
— Witness001 on r/therapists
Notes take time and mental bandwidth, more than anyone outside the field seems to realize. We have to recall the session, which becomes exponentially more challenging if we’ve put it off or didn’t jot down handwritten notes. If we did scribble something during or after, now we’re decoding shorthand or transcribing our fragmented thoughts. We’re mentally sifting through what matters. We’re not writing memoirs anymore; those long narratives we slogged over during internship are now too much. Now we’re trained to get to the point: what's clinically relevant? What meets medical necessity? What protects us in case of an audit?
We also have to decide what not to include, details that may be deeply meaningful but inappropriate for a clinical record, especially in cases where notes could be released to parents, partners, or insurance reviewers. Every word becomes a calculation: what's helpful, what's required, and what might backfire?
So, yes, some clinicians report saving hours each week with AI support. Others value the cleaner, more polished language, especially when anticipating insurance scrutiny or legal risk. And in high-volume settings or practices relying on brief interventions, that kind of time savings can be the difference between burnout and staying afloat.
“I’ve been using Twofold Health for my AI note-taking, and it’s been a game changer. It’s fully HIPAA‑compliant... the notes are customizable to fit your workflow.” — u/Fit‑Astronaut6464 on r/therapists
But with that efficiency comes tradeoffs that we cannot ignore.
What the Transcript Misses
Even the best AI note-taker reduces a living, breathing session to words on a screen. It captures text under the illusion of completeness, but leaves out everything that gives the moment its meaning. It can’t track the rhythm of shared silence, the subtle shift in the room’s emotional temperature, or the microexpressions that signal something just beneath the surface. It doesn’t notice the flush creeping across a client’s chest, the bulging of a jugular vein during agitation, or the tightening around the eyes when tears are being held back. And it certainly doesn’t know what to make of a sand tray therapy session, where the meaning lives in metaphor, spatial arrangement, and symbolic play – those spaces between spoken words.
Good therapy happens in the interstices, in the hesitations, the sighs, the posture changes, the music of speech. Those data points inform our interventions and form the backdrop of our countertransference. But AI scrapes them away, leaving a flat transcript that risks misleading us into believing we have the whole story.
Here’s where the philosophical problem deepens. AI reduces lived, sensory experience to written language. It’s what Jacques Derrida might call textuality, where the meaning of a thing is no longer anchored in its original context but becomes untethered and endlessly referential. As Derrida argued, language doesn't point to some objective truth. It points to other language. Meaning is not fixed; it’s deferred, slippery, and context-bound. When we let AI “summarize” a session, we’re translating lived experience into a system optimized for legibility and compliance.
And when we later rely on that note, whether for continuity, supervision, or court testimony, we risk reconstructing a session that never quite existed.
Losing the Reflective Function
The transcript may be fast and searchable, but it’s also a reduction. We should treat it as a prompt for reflection, not a replacement for our embodied memory of the work. Therapy doesn’t live in the text alone. It lives in the unsaid, the sensed, the shared silence, which is precisely what AI cannot hear.
Note-writing is more than a chore. It’s a clinical process. It gives us a moment to pause, reflect, and make meaning of what just happened.
When we outsource this to AI, we risk losing that reflective muscle. We start treating the note as an obligation to be completed, not a tool to deepen our understanding of the work.
“Writing up my own therapy notes is part of my process. It helps me to consolidate what I’ve taken from the session and what I want to remember for the next session.”
— u/hellomondays on r/therapists
There’s a quiet intimacy in handwriting our notes, a moment of reflection that helps us slow down and make meaning of what just happened in the room. Research shows that handwriting improves memory retention and cognitive processing, likely because it engages deeper parts of the brain involved in encoding information. For many therapists, jotting notes during or after a session isn’t just about record-keeping. It’s a way of metabolizing the emotional content, sorting through countertransference, and staying connected to the clinical moment. AI-generated notes risk stripping that away. When we outsource this process to a machine, we may gain efficiency, but we lose the embodied act of making sense and committing the work to memory. Our notes are more than just files for compliance.
The Gaze, the Watcher, and the Third Presence
Many AI tools claim HIPAA compliance and offer encryption, data masking, and controlled access. But therapists on Reddit and elsewhere are still concerned:
“TheraPro is OPENLY free and clear to sell your recordings… If you use these tools, the de‑identified content within session recordings is fair game and there’s nothing you can do about it.” — u/TheraPro‑warning on r/therapists (via TheraPro TOS excerpts)
“SimplePractice note taker ‘we may improve the feature using (de‑identified) transcription data… which can include training (the ai model)’” — therapist on r/therapists, citing TOS excerpts
We work with vulnerable people who’ve had their stories misused and their autonomy disregarded. Even when a tool claims to anonymize data, we need to ask: Who owns this information? What’s being done with it? How do we explain this to clients in a way that ensures genuine, informed consent?
And perhaps more importantly, does this technology fundamentally shift how safe the space feels?
“As a therapist who has been a client—I would be absolutely horrified if my therapist was allowing AI to listen to my sessions to write a note.” — u/wildwhisker on r/therapists
Therapy is intimate work. The strength of the alliance often depends on a client’s sense that the space is confidential, human, and attuned. Even with consent, the presence of AI changes that.
Some clients may not mind. Others will. And many may feel they don’t have a real choice. They might sign the consent form without fully understanding what they’re agreeing to out of pressure, confusion, or a desire to please their therapist. Some individuals may struggle with setting boundaries or find it difficult to say no, particularly in situations where the therapist is perceived as the expert. Others may feel that declining could jeopardize the relationship or create awkwardness. In this context, consent becomes murky. It starts to feel less like an informed decision and more like a coerced one. There's a potential conflict of interest here that deserves far more ethical scrutiny than it’s currently receiving.
“If my therapist asked me ... I would walk straight out of the room and report them to the ethics board.” — u/feel_your_feelings_ on r/therapists
The simple knowledge that a machine is transcribing the session in real time, or that the therapist will later review the session through an AI dashboard, can impact what clients choose to share. It introduces a shift in relational energy, a subtle but powerful displacement of safety.
When clients know they are being watched, or might later be watched, even indirectly, they begin to self-monitor. The therapeutic space, once private and dyadic, starts to resemble something else: a performance under imagined observation. The AI becomes a silent observer with indeterminate reach. Perhaps no one is reviewing the recording, or perhaps many are. But the potential is enough to shape behavior.
Foucault’s panopticon wasn’t about actual surveillance; it was about the internalization of surveillance. It was about the pressure to conform, not because you are being watched, but because you might be. When AI enters the room, it introduces that very mechanism. Clients begin to shape their disclosures for the transcript. Therapists may find themselves speaking differently, knowing the algorithm may be analyzing tone, structure, or intervention style. The gaze of the machine is abstract, but it still watches.
For therapists who already use polyvagal theory, trauma-informed practices, or neurodiversity-affirming models, this “third presence” matters. It’s not a neutral tool. It has texture, and its presence can dysregulate the nervous system in subtle ways. What looks like silence may be suppression. What looks like consent may actually be appeasement.
When the watcher is everywhere, even the most relational space can begin to feel procedural. And the cost of that shift may be something we can’t easily measure, but our clients likely will feel it first.
Metrics and the Mechanization of Clinical Work
One of the more insidious shifts is the way AI note tools track therapist behavior. Tools like Upheal and Mentalyc go beyond note-taking to generate session analytics. Upheal offers metrics such as therapist vs. client talk ratios, moments of silence, speech cadence, and sentiment breakdowns. Mentalyc, meanwhile, tracks intervention types (e.g., CBT techniques) and therapeutic alliance markers via Alliance Ginie™, and provides treatment-plan insights. While marketed as clinical tools, these features may increasingly be used to assess therapist behavior and shape clinical dynamics.
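To make concrete how reductive these numbers are, here is a minimal sketch of how a talk ratio and silence count might be computed from a diarized transcript. It is an illustration under assumptions, not any vendor’s actual method: the Segment format, the speaker labels, and the three-second silence threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # "therapist" or "client", as labeled by a diarization step (assumed)
    start: float   # seconds from the start of the session
    end: float     # seconds from the start of the session

def session_metrics(segments: list[Segment], silence_threshold: float = 3.0) -> dict:
    """Crude talk-ratio and 'long silence' counts from a diarized transcript.

    Anything that isn't labeled speech becomes a gap, regardless of what was
    actually happening in the room.
    """
    ordered = sorted(segments, key=lambda s: s.start)

    # Total seconds of labeled speech per speaker
    talk_time: dict[str, float] = {}
    for seg in ordered:
        talk_time[seg.speaker] = talk_time.get(seg.speaker, 0.0) + (seg.end - seg.start)

    # Any gap between consecutive utterances longer than the threshold counts as "silence"
    long_silences = sum(
        1 for prev, nxt in zip(ordered, ordered[1:])
        if nxt.start - prev.end >= silence_threshold
    )

    total = sum(talk_time.values()) or 1.0
    return {
        "therapist_talk_pct": round(100 * talk_time.get("therapist", 0.0) / total, 1),
        "client_talk_pct": round(100 * talk_time.get("client", 0.0) / total, 1),
        "long_silences": long_silences,
    }

# A 3.5-second pause registers identically whether it held grief, reflection, or boredom.
print(session_metrics([
    Segment("client", 0.0, 40.0),
    Segment("therapist", 43.5, 55.0),
    Segment("client", 55.5, 120.0),
]))
```

Notice what the arithmetic cannot distinguish: the pause that held grief and the pause that held boredom land in the same “long silences” bucket. Everything the dashboard reports sits downstream of reductions like these.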
Depending on your perspective, this is either helpful clinical insight or the beginning of a metrics-driven nightmare.
Therapists may feel like they’re being graded on performance by an algorithm. If you’re a private pay therapist, maybe you can ignore that. But in agency settings or systems that embrace value-based care, these metrics could be used for performance evaluation, staff training, or even insurance authorization. Once the data is collected, it can, and likely will, be used.
And that has serious implications for autonomy, creativity, and the nuance of clinical work.
Therapists may start gravitating toward modalities that are easier to measure, like CBT, simply because the tools can track them. Somatic work, play therapy, and trauma-informed approaches, often nonlinear, intuitive, and embodied, don’t translate cleanly into “intervention types.” They may appear as noise in the data. And so, the pressure builds: structure your work to be legible, to be recorded, to be rationalized. Because what isn’t captured can’t be counted, and what isn’t counted may be seen as clinically irrelevant.
This is the logic of the algorithm: what is seen is what exists. What cannot be coded disappears.
It mirrors shifts we’ve seen in education, where high-stakes testing and data dashboards have pushed teachers to teach to the test, rather than to the needs of their students. Or in medicine, where EMR systems incentivize checkboxes and documentation over bedside manner, leaving physicians staring at screens instead of making eye contact with patients. These systems often start as tools to support practitioners but quickly become instruments of compliance and control.
“Often, the real person who is the object of all this documentation, coding, and billing is merely an icon on the computer screen.” — Verghese, A. (2008). Culture Shock — Patient as Icon, Icon as Patient. New England Journal of Medicine, 359(26), 2748–2751.
In therapy, we risk the same fate: sessions reduced to data sets, interventions filtered through drop-down menus, and clinical intuition replaced by optimization. The artistry of therapy, the pauses, the pivots, the risks taken in real time, can’t always be measured. But what happens when it’s no longer encouraged? What happens when you start doing what the dashboard rewards, and avoiding what it ignores?
“Without careful ethical oversight, AI could lead to unintended consequences such as misdiagnosis or the erosion of the therapeutic relationship between patients and human therapists.” — Zhang, Z., & Wang, J. (2024). Can AI replace psychotherapists? Exploring the future of mental health care. Frontiers in Psychiatry, 15, Article 1444382.
It’s easy to imagine a future where trauma work, which is often nonlinear, emotionally intense, and resistant to easy progress tracking, is subtly de-incentivized. Therapists working in systems influenced by AI data might be nudged toward short-term results, symptom checklists, and clearly defined treatment goals. Not because it’s better for the client, but because it’s easier to audit, document, and standardize.
This dynamic becomes even more complicated when we look at companies like Headway, which now offers an AI-assisted note template aimed at producing insurance-compliant SOAP and progress notes. At first glance, it’s just a productivity tool: you paste in a session summary, and it formats the note for you. But behind that convenience is a quiet shift toward standardization. The AI is trained to prioritize what insurance companies want to see, such as medical necessity, symptom tracking, and treatment updates, rather than the subtle nuances in the art of therapy.
Even without audio or live transcripts, the system still imposes a structure. And over time, that structure can shape how we work. When notes are filtered through a compliance lens, what doesn’t fit, like moments of rupture, repair, humor, or embodied connection, risks being flattened or left out entirely.
Foucault might remind us: surveillance isn’t just about being watched—it’s about internalizing the watcher’s values. Even if AI isn’t recording our sessions, it can still reshape them, quietly disciplining the therapeutic process toward legibility, efficiency, and institutional approval.
Hallucinations and Documentation Errors
Then there’s the question of accuracy.
Multiple therapists have reported significant errors in AI-generated notes:
“The note indicated past child sexual abuse … when they do not.” Jeanne Pinder, reporting for ClearHealthCosts, cites a therapist on a professional forum expressing concern about Alma’s AI-powered note-taking tool.
“My notes are coming back with things I haven’t talked about with my client, it’s adding that my clients has substance abuse and suicidal issues when they do not and sometimes it says no interventions were reviewed.” — therapist quoted by Jeanne Pinder for ClearHealthCosts.
Even if you’re reviewing and editing AI-generated notes before finalizing them, the risk of missing a detail or letting subtle inaccuracies shape your clinical understanding should give us pause. I’ve seen this before: psychologists copying and pasting sections of old reports in the name of efficiency, only to accidentally include details from another client’s file. The intention was speed, but the result was error. When documentation becomes a task of compliance, accuracy often takes a backseat to productivity. AI may amplify that tendency. Therapists might breeze through session notes more quickly, assuming the draft is mostly correct, especially since they didn’t generate the original text themselves. In that rush, we risk losing our precision and fidelity to the therapeutic moment. These errors are ethically and legally risky.
“Medical-scribe LLMs … are prone to confabulation, where they make up content … especially likely to take short silences or non-speech noises and invent speech.” — Wikipedia article on Automated Medical Scribes
“A study … found that Whisper hallucinated sentences … during moments of silence … researchers noted this poses risks … in high-stakes contexts.” — Wes Davis, reporting for The Verge
And they point to something deeper: AI isn’t neutral. It makes choices. It frames narratives. And if we’re not careful, it will shape how we see our clients.
Whose Data Trains the Future?
Another pressing concern: these systems are not static. They’re learning.
If you're letting a company record and transcribe your sessions, even with PHI removed, there’s a good chance that data is being used to train future iterations of the model. In effect, your labor is helping to build a tool that may one day replace you.
“I share your concern of feeding the AI algorithm with our session. Why would I train my replacement?” — u/INTP243 on r/therapists
This is a political and economic concern. It’s part of a broader trend in platform capitalism, where the unpaid labor and relational expertise of professionals are quietly absorbed into proprietary datasets, owned and monetized by tech companies. This dynamic mirrors what we’ve seen in other industries: ride-share drivers training navigation algorithms, retail workers optimizing logistics systems, and now, therapists training mental health AI, without credit, consent, or compensation.
This is a hallmark of neoliberalism: labor becomes invisible, dispersed, and commodified under the guise of innovation. The value we create through our presence, our attunement, and our clinical intuition is captured and fed into systems designed to standardize, automate, and scale. As Shoshana Zuboff might say, this is a form of surveillance capitalism, where the intimate, relational aspects of therapy are mined not for healing, but for market advantage.
“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data.” — Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
The AI isn’t just helping you write notes. It’s watching how you do your job, extracting patterns, converting them into code, and creating replicable structures that can be packaged and sold. Your clients’ stories, your tone of voice, and your interventions all become data points, folded into a commercial product that doesn’t belong to you.
This raises serious questions about ownership, authorship, and power. Who owns the knowledge generated in a therapy room? What happens when that knowledge is reconstituted in a tool that might one day be used to evaluate, replace, or retrain you? Therapists aren’t just consumers of these platforms. We’re unwitting contributors, unpaid trainers, and test subjects in a system that’s optimizing itself for profitability, not care.
We must ask: Are we documenting for the sake of the work, or are we producing data for someone else’s machine?
“I didn’t think there was any chance we could be replaced, but apparently… clients prefer the f*ing chatbots. Which means insurance companies will be close behind.”
— u/arusansw on r/therapists
How to Move Forward (If at All)
Documentation is hard, and automation absolutely has its place. I say this not as a tech skeptic, but as someone with a background in engineering who uses ChatGPT daily. AI is powerful and sometimes even delightful. It’s helped me across my personal and professional life. I’m not anti-AI; I’m an advocate for thoughtful integration. But we need to proceed with eyes wide open as members of a profession shaped by power, economics, and rapidly evolving technologies.
We’re pioneers shaping the future of mental healthcare. And if we’re not careful, our values and livelihoods could be stripped away.
Here are a few starting points:
Interrogate the Terms. Review privacy policies and terms of service with a critical eye. What’s being stored, for how long, and who has access? Are your session summaries being used to “improve the feature” or to train future models? If the answer isn’t clear, that’s already a red flag.
Decenter the Default. Clients deserve informed consent, not just a signature on a form. That means explaining in plain language what the AI tool does and doesn’t do. Make sure opting out is a real, supported option, not a source of shame or relational tension.
Preserve the Reflective Space. AI can help with efficiency, but it shouldn’t replace the therapist’s internal process. Treat the draft note as a starting point, not a shortcut. Re-read, revise, and reflect. Reconnect with your clinical voice.
Resist the Metrics Creep. Don’t let dashboards define your style. Just because something is measurable doesn’t make it meaningful. Push back on frameworks that reward intervention counts over therapeutic presence, and that pathologize ambiguity or nonlinear growth.
Reject Hidden Labor. Your clinical work is not free training data. If a tool is “learning” from your sessions, especially without compensation or credit, then that’s exploitation, not innovation.
Protect the Relationship. The heart of therapy is trust, not transcription. If AI changes how your client shows up or how you do, you’ve already lost something essential. Before introducing any third presence into the room, ask yourself: What am I trading for convenience?
Stay in Collective Dialogue. Therapists need a seat at the table as ethical stewards. These tools should serve the work, not reshape it from above. We should not be passive consumers of technology. Advocate for transparent development, collaborative design, and systems that prioritize care over compliance.
AI is here. That’s not up for debate.
But whether it becomes a helpful tool or a quiet colonizer of the therapy room, that’s still up to us.