Boss Blog: Integrating AI into PT Care Delivery

By Eric Boss, PT, MBA

I’ve been a Physical Therapist for twenty-five years. My career has evolved into working primarily with data, and I’ve been given the gift of applying my clinical experience in a new way to help improve access and care.

Without question, how and when to integrate Artificial Intelligence (AI) into patient care is a conversation happening daily in every profession and in organizations of all kinds. There is palpable excitement and incredible opportunity. Healthcare, an industry that often lags behind the technology curve, is pushing into the AI space with enthusiasm.

I was born in the 1970s. What computers can do these days is almost surreal to me. That’s not why I’m hesitant to go all in, though. I’ve used many of the Large Language Models and AI assistants for various tasks, including coding, analytics, concept refinement, research, and data visualization. They’ve been a great help to me, so that’s not why I tend to slow down discussions on this topic, either.

When I was preparing to write this article, I asked four of the most common AI tools (Copilot by Microsoft, Gemini by Google, ChatGPT by OpenAI, and Claude by Anthropic) to write a 500-word blog post about the impact of AI on the delivery of Physical Therapy over the next three to five years. These are some concepts all four models shared:

  • AI will deliver clinical insights using technological advances in monitoring movement patterns, creating treatment plans tailored to each person.
  • AI will alleviate the burden of staffing shortages by improving telehealth and potentially acting as a virtual assistant, capable of answering patients’ questions and reinforcing exercise routines between sessions.
  • AI will assist with documentation by acting as a scribe that captures the most relevant elements of each treatment session, decreasing the burden of note-taking on therapists and freeing that time for more personalized interactions with patients.

These are ideals I share. I deeply hope this is where integrating AI into care eventually leads. I can’t help but pause, though, because I work with data for a living.

AI models need to be trained on huge amounts of data, and the content available for training these models is the work we have already done. If an AI were to look at virtually any therapist’s body of work in the twentieth century, it would have to conclude that regardless of the intervention, it was “tolerated well” by that patient on that day.

AI models are masters of aggregation. Are we providing enough information for an AI to know that three sets of ten may be the ideal prescription for everybody on average, but that it’s rarely the right prescription for anybody in particular?
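
To make that aggregation trap concrete, here is a minimal Python sketch; the rep counts below are invented for illustration, not clinical data. Averaged across four hypothetical patients, the ideal prescription comes out to ten reps, even though ten is not the ideal number for any one of them.

    # Toy example: the average prescription can match nobody.
    # These rep counts are hypothetical, not drawn from clinical data.
    ideal_reps = {"patient_a": 6, "patient_b": 14, "patient_c": 7, "patient_d": 13}

    # The aggregate answer: (6 + 14 + 7 + 13) / 4 = 10
    average = sum(ideal_reps.values()) / len(ideal_reps)
    print(f"Average ideal reps: {average:.0f}")

    # A model trained only on aggregate outcomes would suggest 10 for everyone,
    # yet 10 is the ideal number for none of these individual patients.
    for patient, reps in ideal_reps.items():
        print(f"{patient}: ideal reps = {reps}")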

Sometimes a patient doesn’t “look right” and we need to adjust. Sometimes a person doesn’t trust an ACL repair enough to push toward success. Sometimes they’re overconfident in the integrity of their rotator cuff repair. Sometimes they have an emotional or neurophysiological avoidance that’s a barrier to progress.

My concern is that, as professionals, we may tend to let AI determine what it couldn’t possibly know from the data we’ve given it to learn from. In addition, disagreeing with the AI may feel like more work, or more risk, than simply accepting its suggestion.

This is not at all a call to resist or reject AI. It can be extremely helpful, and it is an inevitability in the profession. But it cannot reconstruct the reasoning behind your professional judgment unless we record that reasoning. If you’re a clinician, I’m asking you today to please include the human elements in your documentation. Understand that you’re directly or indirectly training the AI tools right now with every note, and that the training material to this point is inadequate because we’ve always experienced documentation as a burden.

It’s extra work, and if anyone wanted to document more, AI wouldn’t have the momentum it does in the PT world. I understand that. But if we want AI to be something exceptional, we need to define the connections between interventions and results. We need to include our rationale. We need to explain why that specific patient on that day had the results they did in that moment.

Documentation is no longer just a compliance requirement. You’re teaching the tools that will help Physical Therapists be better for decades. The only way AI can possibly meet its potential in the future is if we nurture it through its infancy with our best efforts right now.