Have you ever consulted Dr. Google? When I first began researching Internet-based medicine 25 years ago, everyone was amazed that something like 100 million people per year were searching the Internet for health information. It is hard to overstate the value of the Internet as a way for people to learn more about their own health conditions. In the early days, doctors hated it. Articles appeared in medical journals lamenting all the misinformation patients would encounter and the waste of doctors’ time spent discussing or refuting what their patients found.
Looking back, these fears seem ludicrous. Respected health care systems, such as the Mayo Clinic and the Cleveland Clinic, sponsor websites that provide basic but useful information about health and medicine. It’s now taken for granted that patients will use the Internet, and medical journals no longer dismiss online health information the way they commonly did 25 years ago. Indeed, you can search the medical journals themselves online.
The latest online health phenomenon is ChatGPT, a user-friendly artificial intelligence chatbot. A fourth-year ophthalmology resident at Emory University School of Medicine compared the accuracy of ChatGPT to his own ability to diagnose eye complaints. He had noticed that most of his patients had already turned to Google and often worried that “any number of terrible things could be going on based on the symptoms that they’re experiencing.” He was pleasantly surprised by what he and his colleagues found.
The artificial intelligence engine “is definitely an improvement over just putting something into a Google search bar and seeing what you find,” said co-author Nieraj Jain, an assistant professor at the Emory Eye Center who specializes in vitreoretinal surgery and disease.
But the findings underscore a challenge facing the health care industry as it assesses the promise and pitfalls of generative AI, the type of artificial intelligence used by ChatGPT. The accuracy of chatbot-delivered medical information may represent an improvement over Dr. Google, but there are still many questions about how to integrate this new technology into health care systems with the same safeguards historically applied to the introduction of new drugs or medical devices.
That last sentence is an example of the attitude I encountered years ago: anything that upsets the status quo is worrisome until the authorities figure out a way to harness it.
When it comes to consumer chatbots, though, there is still caution, even though the technology is already widely available — and better than many alternatives. Many doctors believe AI-based medical tools should undergo an approval process similar to the FDA’s regime for drugs, but that would be years away. It’s unclear how such a regime might apply to general-purpose AIs like ChatGPT.
What we need is disruption, not delayed, hesitant integration. The status quo is a barrier to access to care.
“There’s no question we have issues with access to care, and whether or not it is a good idea to deploy ChatGPT to cover the holes or fill the gaps in access, it’s going to happen and it’s happening already,” said Jain. “People have already discovered its utility. So, we need to understand the potential advantages and the pitfalls.”
“We need physicians to start realizing that these new tools are here to stay and they’re offering new capabilities both to physicians and patients,” said James Benoit, an AI consultant. While a postdoctoral fellow in nursing at the University of Alberta in Canada, he published a study in February reporting that ChatGPT significantly outperformed online symptom checkers in evaluating a set of medical scenarios. “They are accurate enough at this point to start meriting some consideration,” he said.
Historically, it has been very difficult to disrupt the existing health care model. The primary reasons are: 1) regulations that protect the model; and 2) third-party payment, which blunts attempts to entice consumers toward disruptive alternatives.
Third-party payment has its roots in the wage and price controls of World War II. The shipbuilder Kaiser was granted permission to offer health benefits in lieu of higher wages. Congress later confirmed that these benefits were tax free, causing employer-provided health benefits to take off relative to paying for health care with after-tax dollars. Later, Congress passed Medicare and Medicaid, which fund medical care for seniors and for the poor. Health care regulation traces back to the Pure Food and Drug Act of 1906, which created the FDA. The Flexner Report (1910) then led to the closure of nearly half of the nation’s medical schools, which Abraham Flexner considered substandard. As a result of these and later regulations, medicine became an elite profession that was more scientific and far more costly.
The entire article, “‘Dr. Google’ Meets Its Match: Dr. ChatGPT,” provides a recent history of ChatGPT in medicine.
Imagine if it were possible to compare AI diagnoses for the 50 most common patient complaints with physician diagnoses, and then reveal whether the AI diagnoses are better or worse than the physicians’ median. Is it unreasonable to expect “better”?
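As a purely illustrative sketch of what such a scoring might look like (the complaints, accuracy figures, and results below are all hypothetical, invented for illustration rather than drawn from any real study), one could tally, complaint by complaint, whether the AI’s accuracy beats the physicians’ median:

    # Hypothetical sketch: compare AI diagnostic accuracy with the physicians'
    # median accuracy for each common complaint. All numbers are invented.
    from statistics import median

    physician_scores = {  # fraction of correct diagnoses, one entry per physician
        "chest pain": [0.82, 0.75, 0.90, 0.78],
        "headache":   [0.70, 0.65, 0.80, 0.72],
        "eye pain":   [0.88, 0.91, 0.84, 0.86],
    }
    ai_scores = {"chest pain": 0.85, "headache": 0.68, "eye pain": 0.90}

    for complaint, scores in physician_scores.items():
        doc_median = median(scores)
        verdict = "better" if ai_scores[complaint] > doc_median else "not better"
        print(f"{complaint}: AI {ai_scores[complaint]:.2f} vs physician median "
              f"{doc_median:.2f} -> AI {verdict}")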
The following is a bit pejorative, but worth thinking about in this context:
Q: What do people call med school students who graduate in the lower half of their class? A: Doctor.
From what I’ve learned about the peer-reviewed journals, Dr. Google can’t be that much worse. At least I’ve got a kind of feel for what to discount.