Many states have passed laws limiting prior authorization. Physicians hate prior authorization and claim insurers and health plans use it to ration care. I tend to be more sympathetic to it: in an industry where patients are insulated from the cost of their care, there need to be some checks and balances on unnecessary care and on care that is unnecessarily expensive. I often tell the story of the time my wife unknowingly tried to schedule a CT scan at a hospital outpatient clinic near our house. The hospital told her it would need to seek prior authorization before performing the procedure. The insurer approved the scan and, oh by the way, she should be prepared to pay $2,700 in cost sharing. After hearing about the cost, I told my wife never, ever to get anything done at a hospital if she could get it done elsewhere. Within about 10 minutes I found a freestanding radiology clinic that would perform the same scan for $403, the negotiated rate with my wife’s insurer. Prior authorization saved my wife nearly $2,300.
Prior authorization sometimes relies on predictive algorithms to approve or deny care. Artificial intelligence (AI), a fancy term for what used to be called algorithms, has been in the news lately. Besides news articles about ways to use AI to improve care, there have been numerous stories about using AI to deny care. A few days ago Kaiser Health News ran the article, “Feds Rein In Use of Predictive Software That Limits Care for Medicare Advantage Patients.” That raises the question: what is wrong with predictive software that limits, directs, or approves care for someone in a health plan?
According to past research, nearly one-third of care is thought to be unnecessary, wasteful, or harmful and could be avoided without hurting the patient. It has been the goal of every presidential administration (probably since Eisenhower) to use data to better understand which care is beneficial and which is unnecessary. Health economists, health researchers, and virtually everyone in health care and health policy claim the Holy Grail is a better understanding of what constitutes appropriate medical care. It’s called outcomes research. The journey, however, is not without potholes. This from Kaiser Health News:
Judith Sullivan was recovering from major surgery at a Connecticut nursing home in March when she got surprising news from her Medicare Advantage plan: It would no longer pay for her care because she was well enough to go home.
UnitedHealthcare — the nation’s largest health insurance company, which provides Sullivan’s Medicare Advantage plan — doesn’t have a crystal ball. It does have naviHealth, a care management company bought by UHC’s sister company, Optum, in 2020. Both are part of UnitedHealth Group. NaviHealth analyzes data to help UHC and other insurance companies make coverage decisions.
The software analyzes the medical records of millions of patients with similar diagnoses, pre-existing conditions, and other characteristics to predict the care a patient will need. Software such as this purportedly predicts a date of discharge, and coverage for the service is cut off on that date.
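To make the mechanics concrete, here is a minimal sketch of how such a predictor might work, assuming a simple nearest-neighbors approach: find historical patients who resemble the new one and average their lengths of stay. Every name and field below is a hypothetical illustration, not naviHealth’s actual model or data.

```python
# Hypothetical sketch of a discharge-date predictor, assuming a
# nearest-neighbors approach. Names and fields are invented for
# illustration; this is not naviHealth's model or schema.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass(frozen=True)
class PatientRecord:
    diagnosis: str
    age: int
    prior_conditions: frozenset  # e.g., {"diabetes", "hypertension"}
    length_of_stay_days: int = 0  # observed stay; known for historical patients


def similarity(a: PatientRecord, b: PatientRecord) -> float:
    """Crude similarity score: same diagnosis, close age, shared conditions."""
    score = 2.0 if a.diagnosis == b.diagnosis else 0.0
    score += 1.0 - min(abs(a.age - b.age) / 50, 1.0)  # age proximity
    overlap = len(a.prior_conditions & b.prior_conditions)
    union = len(a.prior_conditions | b.prior_conditions) or 1
    return score + overlap / union  # Jaccard overlap of conditions


def predict_discharge(patient: PatientRecord,
                      history: list,
                      admitted: date,
                      k: int = 25) -> date:
    """Average the stays of the k most similar historical patients."""
    nearest = sorted(history, key=lambda h: similarity(patient, h),
                     reverse=True)[:k]
    avg_stay = sum(h.length_of_stay_days for h in nearest) / len(nearest)
    return admitted + timedelta(days=round(avg_stay))
```

Note what the sketch makes plain: the prediction can only reflect the variables the schema happens to include.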
Sullivan’s coverage denial notice and nH Predict report did not mention wound care or her inability to climb stairs. Original Medicare would have most likely covered her continued care, said Samorajczyk.
Sullivan appealed twice and was initially turned down, but she ultimately received coverage for an additional 18 days of nursing home care. Critics claim cases like Sullivan’s are not unique.
New federal rules for Medicare Advantage plans beginning in January will rein in their use of algorithms in coverage decisions. Insurance companies using such tools will be expected to “ensure that they are making medical necessity determinations based on the circumstances of the specific individual,” the requirements say, “as opposed to using an algorithm or software that doesn’t account for an individual’s circumstances.”
Should Medicare Advantage plans and other health plans use algorithms to make predictive decisions about care? Of course; that has been the goal of health policymakers for decades. As with any algorithm, though, there needs to be analysis of when and where it goes wrong. In Sullivan’s case the algorithm did not take into account some important variables, such as the fact that she had to climb stairs to get in and out of her house, needed wound care, and had a colostomy bag that required servicing.
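A toy example (all names invented) shows why an omitted variable is a blind spot rather than an error more data can correct:

```python
# Hypothetical illustration: a predicted stay based only on diagnosis
# and age. Variables the model never sees, such as stairs at home or
# wound care, cannot move its output no matter how much they change
# the patient's real needs.
def predicted_stay_days(diagnosis: str, age: int) -> int:
    base = {"post-surgical recovery": 14}.get(diagnosis, 10)
    return base + (3 if age >= 75 else 0)

# Two patients who are identical on the model's inputs...
print(predicted_stay_days("post-surgical recovery", 76))  # 17
print(predicted_stay_days("post-surgical recovery", 76))  # 17
# ...get the same coverage cutoff, even though one needs wound care,
# services a colostomy bag, and climbs stairs to enter her house.
```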
The debate over algorithms that govern access to medical care reminds me of the time 30 years ago when Americans turned to managed care to save money. Managed care was designed to ration care and provide only necessary care. Data showed it worked, but Americans hated it and turned to lawmakers to limit the ability of HMOs to deny unnecessary care.
My mother is in a nursing home and I have pretty frank discussions with the staff.
I find that the disputes over post-surgery care are a big, big deal for the homes. A lot of this is about revenue, of course. Payments to the home run about $250 to $300 per day. That money is crucial to the homes and, conversely, crucial to any health plan trying to make money.
And it is a big deal to the patients and their families. If insurance does not cover rehab care for a disabled senior, the burden can just about crush the health and spirits of any adult child who must provide the care. I am in a support group for caregivers, and I hear this a lot.
So you have a real storm of conflict over this care.