RM: One of the things we’re excited about is the use of generative AI for ambient documentation. The idea is that a secure app on my phone would listen to my clinical visit and draft my notes for me. This means I don’t have to spend the time to do it myself between clinic visits, or at 8 p.m., or four days later. It also means I can pay full attention to my patient, not the EHR, while I’m with them.
We’re about to launch a pilot with 500 physicians and advanced practice providers using ambient documentation at Mass General Brigham. There’s no silver bullet for burnout, but we’re hopeful this will be a game changer for both the provider and patient care experiences. The early, anecdotal feedback is that people are loving it. Physicians are able to look at their patients during visits instead of having to type away as they talk. And that’s what it’s really about for me: figuring out how we can use technology to deliver care better and to deliver better care.
RM: As this technology becomes more widespread, it’s incredibly important that we understand its limitations and ensure guardrails around its use. Over the summer, we launched an AI Governance Committee at Mass General Brigham to develop a framework for the responsible use of AI.
We want to ensure that concerns around issues such as equity, privacy, transparency, and security are addressed, and that vendors are held responsible for the performance of the technology itself. For instance, it’s important that the ambient documentation works just as well for providers and patients who don’t speak English as their primary language. We also require that our vendors delete the recording used for ambient documentation once we've created the note because we don't want a recording of somebody's voice out there, for any number of reasons.
We want to make sure that these technologies benefit our patients, our providers, our health systems, and society. And so we’re starting in the low-risk space of reducing administrative burdens before we move into applications that more directly impact patient care.
RM: I have no idea! I hesitate to predict what might happen that far out, but I think in the next 3–5 years, we will drastically change how we deliver care. Compared to 10 years ago, there is exponentially more medical knowledge and information available to healthcare providers today. With large language models and AI in general, we’re able to harness all of that information and make it actionable for an individual provider and patient.
I don’t think it’s coming to practice tomorrow, although plenty of people are already hard at work developing and testing this, but there will be a day when AI will be able to take a patient presenting with a constellation of symptoms and help me refine a differential diagnosis and treatment plan. We must be good stewards of the technology as we roll it out, be cognizant of its limitations, and use it in a way that promotes better care: higher-quality, safer, more equitable care.