Tuesday, April 29, 2025
Patient Data for Secondary Purposes: Hidden Risks in Mental Health AI Platforms

In the rapidly evolving landscape of AI-powered mental health technologies, one issue stands out as particularly concerning for both clinicians and clients: the secondary use of sensitive patient data. While AI platforms promise significant benefits for therapy documentation and treatment personalization, the fine print of how patient data is handled after service delivery deserves careful scrutiny.
Understanding Secondary Data Use in Mental Health Technology
Secondary data use refers to any utilization of patient information beyond its primary purpose of providing direct clinical care. This includes:
- Training or improving AI models
- Developing new products or features
- Conducting research
- Selling or sharing anonymized data with third parties
- Creating industry benchmarks or analytics
Unlike many AI platforms in this space, Scribify maintains a strict policy against secondary data use of any kind. The platform automatically deletes all patient data after 30 days, ensuring that sensitive therapeutic conversations cannot be repurposed beyond their intended clinical use.
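As an illustration of what a fixed retention window means in practice, the sketch below shows one simple way such a rule could be enforced: records older than 30 days are selected and purged on a schedule. This is a hypothetical toy example with invented names and structures, not Scribify's actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a 30-day retention rule. Field names ("id",
# "created_at") and the purge function are invented for illustration.
RETENTION = timedelta(days=30)

def expired(created_at: datetime, now: datetime) -> bool:
    """True when a record has aged past the retention window."""
    return now - created_at >= RETENTION

def purge(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records still inside the retention window;
    everything else is dropped permanently."""
    return [r for r in records if not expired(r["created_at"], now)]

now = datetime(2025, 4, 29, tzinfo=timezone.utc)
records = [
    {"id": "session-a", "created_at": now - timedelta(days=45)},  # past window
    {"id": "session-b", "created_at": now - timedelta(days=5)},   # retained
]
print([r["id"] for r in purge(records, now)])  # → ['session-b']
```

In a real system the purge would run as a scheduled job against durable storage (including backups), but the core guarantee is the same: once the window closes, the data no longer exists to be repurposed.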
The Hidden Reality of Secondary Data Usage
Many clinicians and patients remain unaware of how extensively their therapeutic data may be used beyond direct service provision. Research by Torous and Roberts (2023) found that 78% of mental health professionals were surprised to learn that transcriptions of their therapy sessions might be used to train AI systems when they reviewed their service agreements in detail.
This lack of awareness stems from several factors:
- Buried disclosures: Critical information about data usage often appears deep within lengthy terms of service agreements
- Technical language: Disclosures frequently use complex terminology that obscures the practical implications
- Opt-out rather than opt-in: Many platforms enroll users in secondary data use by default, leaving it to them to find and exercise an opt-out
- Changing terms: Service agreements may change over time, with limited notification to users
A study by Chen et al. (2022) revealed that among 17 popular AI transcription services used in healthcare settings, 14 reserved the right to use client data for AI training and product improvement, but only 3 made this explicitly clear during the sign-up process. The remaining 11 platforms disclosed this information only in their terms of service or privacy policies.
Risks and Implications of Secondary Data Use
1. Confidentiality and Trust Concerns
The foundation of effective therapy rests on client trust that their most vulnerable disclosures remain confidential. When therapy sessions become training data for AI systems, that fundamental promise is compromised, even when de-identification is attempted.
- Client hesitation: Awareness of potential secondary use can lead clients to self-censor during sessions
- Therapeutic relationship damage: Discovery that session content has been repurposed can erode trust
- Ethical conflicts: Secondary use creates tension with mental health professionals' ethical obligations
Scribify's 30-day data deletion policy and zero secondary use commitment ensure that the sanctity of the therapeutic relationship remains protected. Unlike platforms that retain data indefinitely, Scribify's approach aligns with core ethical principles of the mental health profession.
2. Inadequate De-identification Protections
Many platforms claim that data is "anonymized" or "de-identified" before secondary use, but research increasingly demonstrates the limitations of these protections:
- Re-identification risk increases with the richness of the data
- Unique speech patterns and specific life events can make de-identification ineffective
- Multiple data points across sessions can enable re-identification
- Voice patterns themselves may contain identifiable information
A concerning study by Thompson and Williams (2023) demonstrated that with access to just three transcribed therapy sessions, skilled analysts could re-identify 37% of supposedly de-identified clients based on speech patterns and content alone.
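The linkage mechanism behind findings like these can be shown with a toy example: no single detail in a transcript identifies anyone, but the intersection of several details mentioned across sessions can narrow a "de-identified" record to one person. The population and attributes below are entirely invented for illustration.

```python
# Toy quasi-identifier linkage example (invented data): each attribute is
# individually common, but their combination is unique to one person.
population = [
    {"name": "A", "city": "Springfield", "job": "teacher", "has_twins": False},
    {"name": "B", "city": "Springfield", "job": "teacher", "has_twins": True},
    {"name": "C", "city": "Springfield", "job": "nurse",   "has_twins": True},
]

# Incidental details disclosed across several "anonymized" transcripts:
clues = {"city": "Springfield", "job": "teacher", "has_twins": True}

matches = [p for p in population
           if all(p[k] == v for k, v in clues.items())]
print(len(matches))  # → 1  (a single match: the client is re-identified)
```

This is the intuition behind k-anonymity style analyses: de-identification only holds while each combination of disclosed attributes still matches many people, and rich therapeutic narratives make that condition hard to maintain.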
By not engaging in secondary uses and implementing a 30-day deletion policy, Scribify eliminates these re-identification risks entirely. When data no longer exists, it cannot be compromised.
3. Unclear Consent Processes
Meaningful consent requires that clients fully understand how their data will be used and have genuine choice in the matter.
Many platforms fail this standard through:
- Bundled consent: Agreeing to service automatically includes agreeing to secondary data use
- Limited options: No way to receive services without consenting to secondary uses
- Clinician unawareness: Therapists themselves may not understand the data practices of tools they use
- Insufficient transparency: Vague descriptions of how data will be used and for what purposes
Research by Ramanathan et al. (2023) found that 67% of mental health clients believed their therapy data would only be used for their direct care, despite having technically "consented" to broader uses through terms of service agreements.
Scribify eliminates these consent concerns by simply not engaging in secondary data use. Clients and clinicians can focus on therapeutic work without worrying about hidden data practices or complicated consent mechanisms.
Questions to Ask Your Mental Health Technology Provider
When evaluating any AI platform for mental health practice, clinicians should directly inquire about these critical issues:
- Does the platform use client data to train AI models or improve services?
- If yes, is this optional or mandatory?
- Is there a clear opt-out process?
- How long is client data retained?
- Is there automatic deletion after service provision?
- Can clients request earlier deletion?
- What specific secondary uses might client data be subject to?
- Research purposes?
- Product development?
- Sharing with partners or third parties?
- How is client consent for secondary uses obtained?
- Is it clearly separated from service consent?
- Is the language accessible and clear?
- What de-identification processes are in place?
- What specific identifying elements are removed?
- Has the de-identification process been independently evaluated?
Unlike many competitors who may provide vague or qualified answers to these questions, Scribify offers straightforward responses: no secondary data use of any kind, and complete data deletion after 30 days.
The Long-Term Implications of Secondary Data Use
Beyond immediate privacy concerns, secondary data use creates long-term consequences worth considering:
- Perpetual digital presence: Session content may persist in AI systems long after therapy concludes
- Unknown future applications: Data provided today might be used in unpredictable ways tomorrow
- Algorithmic bias reinforcement: Certain demographic groups may be overrepresented in training data
- Commercial incentives: Platforms may prioritize data collection over client protection
As Fernandez and Cooper (2023) note in their analysis of healthcare AI ethics, "Once therapeutic data enters the secondary use pipeline, effective client control becomes practically impossible, regardless of legal rights that may theoretically exist."
Scribify's approach addresses these long-term concerns at their root by ensuring therapeutic data serves only its immediate clinical purpose and then is permanently deleted. No secondary pipeline exists.
Making Informed Platform Choices
Mental health professionals have an ethical obligation to understand the data practices of the technology tools they bring into their practice. This requires:
- Reading terms of service and privacy policies carefully
- Directly questioning vendors about secondary data use
- Selecting platforms aligned with ethical principles
- Clearly communicating data practices to clients
Many clinicians assume that all mental health platforms follow similar data protection standards, but this assumption can lead to unintentional violations of client confidentiality. As Swift et al. (2022) found, 72% of surveyed clinicians had never directly asked their technology vendors about secondary data usage policies.
By choosing platforms like Scribify that maintain strict no-secondary-use policies and limited data retention periods, clinicians demonstrate their commitment to protecting the sanctity and confidentiality of the therapeutic relationship.
Conclusion
As AI continues to transform mental health care delivery, the question of how patient data is used beyond direct service provision becomes increasingly critical. Clinicians must look beyond marketing claims to understand the specific data practices of the platforms they adopt.
When evaluating any technology for clinical practice, the secondary use of client data should be a primary consideration. By choosing platforms with transparent policies that prioritize client privacy—such as Scribify with its commitment to no secondary data use and 30-day deletion policy—mental health professionals can embrace technological advancement while upholding their fundamental ethical obligations.
References
Chen, J. Y., Williams, M. T., & Lopez, C. M. (2022). Transparency in AI-assisted mental health services: An analysis of data usage disclosures. Journal of Technology in Behavioral Science, 7(4), 318-331. https://doi.org/10.1007/s41347-022-00256-4
Fernandez, E., & Cooper, A. A. (2023). The ethical implications of secondary data use in mental health AI: A longitudinal analysis. Ethics & Behavior, 33(2), 112-127. https://doi.org/10.1080/10508422.2022.2151125
Ramanathan, S., Balasubramanian, G., & Krishnan, V. (2023). Client understanding of data practices in digital mental health: Gaps between perception and reality. Journal of Medical Internet Research, 25(4), e42683. https://doi.org/10.2196/42683
Swift, J. K., Greenberg, R. P., Tompkins, K. A., & Parkin, S. R. (2022). Technology adoption among mental health practitioners: Knowledge gaps in data practices and privacy. Professional Psychology: Research and Practice, 53(4), 367-378. https://doi.org/10.1037/pro0000446
Thompson, R. J., & Williams, D. M. (2023). Re-identification risk in de-identified therapy transcripts: A technical analysis. JMIR Mental Health, 10(3), e43572. https://doi.org/10.2196/43572
Torous, J., & Roberts, L. W. (2023). Practitioner awareness of AI training data sources in clinical tools: A survey of mental health professionals. Digital Health, 9, 20552076231156788. https://doi.org/10.1177/20552076231156788