Innovation or exploitation: the ethical challenges of AI in speech therapy


Artificial intelligence (AI) is everywhere. It’s in our phones, our search engines, and increasingly, in our health care systems. For speech-language pathologists (SLPs), AI holds the potential to be a powerful ally—offering tools to enhance patient care, track progress, and even expand access to therapy for underserved populations. But with every innovation comes responsibility. The ethical concerns surrounding AI are not hypothetical—they are real, pressing, and demand our attention. This article builds on themes introduced in my previous piece, “Finding voices with AI: A call to developers, administrators, and corporations to support ethical speech therapy.”

As a speech-language pathologist dedicated to helping people communicate, I know how deeply personal and human the speech therapy process is. The ability to communicate shapes our relationships, self-expression, and autonomy. AI can amplify speech therapists’ work only if developed and implemented ethically. That means we, as speech-language pathologists, must be involved from the ground up and throughout the lifespan of any AI tool. There should never be a time when SLPs are not actively shaping how AI is designed, tested, and used.

The challenges we face are significant and immediate, requiring careful examination:

Algorithmic bias: AI systems are trained on data, and data is rarely neutral. In speech therapy, tools trained on datasets lacking diversity—whether in language, dialects, or cultural nuances—risk failing entire populations. Misdiagnoses or ineffective treatment plans could disproportionately affect non-native speakers, individuals with rare disorders, or those whose speech patterns deviate from the “norm.” These are not just technical oversights; they are systemic failures that widen existing disparities. SLPs must advocate for diverse, representative datasets and remain involved in development to ensure equity.

Data privacy and security: Speech therapy involves deeply personal information about a patient’s abilities, challenges, and identity. AI systems require vast amounts of data to function, but how do we ensure that data is secure? Laws like HIPAA in the U.S. and GDPR in Europe offer critical protections. HIPAA safeguards medical information by requiring consent for disclosures and ensuring data is anonymized and securely managed. GDPR goes further, requiring explicit consent and transparency about how data is collected and used. Yet, these frameworks often struggle to keep pace with rapid AI advancements, leaving patients vulnerable to breaches or misuse. Developers and organizations must go beyond compliance, prioritizing robust privacy protections to maintain trust.

Corporate exploitation: AI’s efficiency is a double-edged sword. While it can streamline care, it is already being used by corporations to justify reducing services or denying coverage altogether. For example:

  • According to an investigation reported by Ars Technica, UnitedHealth’s AI tool had a 90 percent error rate, leading to premature discharges.
  • Vanity Fair revealed that Cigna’s predictive AI software denied over 300,000 claims in just two months, averaging under two seconds per review.
  • Humana faced similar allegations of using AI to cut off care for elderly patients, leaving their needs unmet, according to a report by Ars Technica.

These practices erode trust in health care. Without strict oversight, AI risks becoming a tool for corporate profits at the expense of patient well-being. Policymakers must act to ensure that AI prioritizes patient care, not bottom lines.

Over-reliance on AI: No matter how sophisticated it becomes, AI will never replace the empathy, creativity, and judgment clinicians bring to their work. Patients are not data points—they are individuals with complex needs requiring human connection. While AI can assist, it should never erode the clinician’s role. SLPs must remain central to therapy, guiding AI as a tool to enhance, not replace, their expertise.

Environmental impact: Often overlooked, the environmental cost of AI is substantial. Training large models consumes immense energy, much of which comes from non-renewable sources, contributing to the climate crisis. Developers and policymakers must prioritize sustainable practices, such as transitioning to renewable energy and optimizing AI systems for efficiency, to mitigate harm.

Addressing these challenges requires coordinated action from all stakeholders. Developers must collaborate with SLPs throughout the lifecycle of AI tools to ensure they are ethical, inclusive, and adaptable. Policymakers must enforce regulations that protect patient privacy, prevent misuse, and mandate clinician involvement. SLPs must advocate for ethical practices within their profession and participate in the development of tools that reflect values like equity, compassion, and patient-centered care. Patients and caregivers must demand transparency and accountability, ensuring their voices are heard in shaping how AI impacts care.

Encouragingly, some organizations are paving the way. Speech Therapy Copilot emphasizes transparency and patient-centered care. Liricare focuses on accountability and ethics in AI integration. Constant Therapy demonstrates the importance of clinician involvement, while AAC companies like Lingraphica and Tobii Dynavox empower patients with tools that improve speech and communication. These efforts show what’s possible when technology and ethics align, but they also highlight the need for ongoing vigilance to ensure AI enhances care without compromising humanity.

AI will play a role in shaping the future of speech therapy—there’s no question about that. The biggest question is whether that future will reflect our highest values or our deepest flaws. Will AI widen disparities or bridge them? Will it prioritize people or profit? Communication is not just a basic human right—it is the essence of our identity. We must resoundingly demand that AI preserve our voices and connections, enhancing everything that makes us human for generations to come.

Jaclyn Caserta-Wells is a speech pathologist.

