On February 6, the US government issued a memorandum to Medicare Advantage insurers, clarifying that artificial intelligence cannot be used as the sole basis for denying claims. The memo comes in response to lawsuits against health insurers such as UnitedHealthcare and Humana, which have been accused of using AI to wrongfully deny coverage. Plaintiffs claim that the AI model nH Predict has a 90% error rate, highlighting a dangerous aspect of the technology that is drawing increasing attention.
The Centers for Medicare & Medicaid Services expressed concern about the algorithms’ potential to exacerbate discrimination and bias and called on insurers to ensure their models comply with anti-discrimination requirements. Several states, including New York and California, have also warned insurance companies to check the fairness of their algorithms.
The memo emphasizes that while machine learning algorithms can assist in coverage decisions, they cannot serve as the sole basis for them: insurers must apply human judgment when deciding claims. It also means that patients whose claims were denied by an AI algorithm may have grounds for appeal if they can show that the decision lacked meaningful human review.