At the recent HLTH 2023 conference, Munjal Shah, CEO of Hippocratic AI, participated in a panel discussion titled “There’s No ‘AI’ in Team.” He spoke about how artificial intelligence could help alleviate worldwide healthcare staffing shortages, a problem he refers to as “understaffing.”
The annual HLTH event in Las Vegas brings together leaders in healthcare innovation. This year, much of the discussion centered on leveraging recent advances in generative AI. While cautious about using large language models (LLMs) for diagnosis, Shah sees excellent potential in non-diagnostic applications.
Hippocratic AI, which Shah co-founded, aims to address understaffing in areas like chronic care nursing, scheduling, and dietitian services. Munjal Shah cited a sobering statistic: a projected global deficit of 10 million healthcare workers by 2030, according to the World Health Organization. He argued this staffing crisis, already straining healthcare systems, represents an ideal starting point for deploying generative AI.
Shah explained on the panel that AI could provide some services at massive scale for a fraction of the cost of human labor. For example, chronic care nursing from a human nurse may cost nearly $100 per hour, whereas AI could provide similar services for around $1 per hour without succumbing to burnout. This exponential increase in virtual staff capacity is the crux of his answer to understaffing.
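The arithmetic behind that comparison is straightforward. A short illustrative calculation using the hourly figures cited on the panel (the budget amount here is hypothetical):

```python
# Hourly rates cited on the panel (USD per hour).
human_rate = 100.0  # human chronic care nurse
ai_rate = 1.0       # AI-provided service

# Hypothetical budget to illustrate the difference in coverage.
budget = 10_000.0

human_hours = budget / human_rate  # hours of human coverage the budget buys
ai_hours = budget / ai_rate        # hours of AI coverage the same budget buys
cost_multiple = human_rate / ai_rate  # per-hour cost ratio

print(human_hours, ai_hours, cost_multiple)  # 100.0 10000.0 100.0
```

At these rates, the same budget buys roughly 100 times as many hours of coverage, which is the scale argument behind the panel's staffing claim.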
The panel agreed that effectively implementing AI requires a “centaur approach”: combining the power of machines with human expertise and oversight. Munjal Shah said that training AI safely demands partnering with health systems and creating governance frameworks. But many use cases, like virtual nursing, make sense.
The key is coupling AI with human feedback, a technique known as reinforcement learning from human feedback (RLHF). This allows LLMs to learn from mistakes over time rather than repeating them. Shah stated that extensive training of LLMs on expert health sources is also critical for trustworthy AI.
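Conceptually, that feedback loop works by scoring candidate responses and down-weighting the ones human reviewers reject. A minimal, purely illustrative sketch in Python (none of these names or responses reflect Hippocratic AI's actual system):

```python
class FeedbackTrainedResponder:
    """Toy sketch of learning from human feedback: keep a preference
    score per candidate response and nudge scores based on reviewer
    signals, so an answer flagged as wrong becomes less likely."""

    def __init__(self, candidates, learning_rate=0.5):
        # Start with no preference among the candidate responses.
        self.scores = {c: 0.0 for c in candidates}
        self.lr = learning_rate

    def respond(self):
        # Serve the currently highest-scoring response.
        return max(self.scores, key=self.scores.get)

    def feedback(self, response, approved):
        # Human reviewer signal: approve (+1) or reject (-1).
        self.scores[response] += self.lr * (1.0 if approved else -1.0)


unsafe = "Double your next dose if you miss one."
safe = "Check with your pharmacist before changing doses."
responder = FeedbackTrainedResponder([unsafe, safe])

# Clinician reviewers reject the unsafe answer and approve the safe one,
# so the responder stops repeating its mistake.
responder.feedback(unsafe, approved=False)
responder.feedback(safe, approved=True)
print(responder.respond())  # the safe response now wins
```

Production RLHF trains a reward model over human preference data rather than tracking per-response scores, but the principle is the same: human judgment steers the model away from errors instead of letting it repeat them.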
At Hippocratic AI, thousands of medical professionals test and train the company’s LLMs to ensure suitably expert responses. The goal is conversational AI that can replicate empathetic human interactions in areas like explaining billing, delivering test results, and answering patient questions.
A recent JAMA study showed participants favored ChatGPT’s responses to patient questions over those written by doctors on both quality and empathy. For Shah, such findings demonstrate the promise of generative AI for patient-facing roles.
By increasing virtual staff capacity exponentially, AI could help deliver comprehensive, high-quality care to more patients. As Munjal Shah put it, imagine providing a personal chronic care nurse for all 68 million Americans with multiple chronic conditions, or following up with every patient starting new medications.
While not a magic solution for healthcare’s woes, responsible deployment of generative AI could make real dents in key problem areas like staffing shortages. But human expertise remains essential for training safe, trustworthy AI. Combining the best of both offers an immense opportunity to improve healthcare access and outcomes.