Artificial Intelligence (AI) is a misnomer. There is nothing “artificial” about the origins of a system that purports to perform tasks that would typically require human intelligence, such as problem-solving, decision-making, and language comprehension. After all, AI is only as good as the information it synthesizes, which comes from human sources, and the algorithms it applies, which still have their origins in programming decisions made by humans.
AI is the technological equivalent of a child of over-achieving parents. Its capacity for human discernment is limited to the parameters established by those who develop it. Chatbots such as ChatGPT are a good example: they can be programmed to generate a great deal of data, but without clinical experience, differentiating synthesized fact from generated fiction is almost impossible.
Reservations aside, AI is in its transition to adolescence and can play a number of roles in assisting the provision of good and better healthcare. Here are several ways AI could further assist clinicians now:
assist in clinical decision-making by reframing and/or simplifying the information provided to a clinician in a manner that is specific to a particular context or consultation. For example, Body Mass Index, a global and crude measure that never had sound evidence to support its use in diagnosing obesity, could be replaced with a unique measure of “toxic obesity” for an individual, using the numerous databases available to correlate weight, genetics, environment, family history, existing co-morbidities, and eating and exercise patterns.
provide regularly updated reference points from the doctor’s or patient’s relationship network – rather than having several different health organizations issue often contradictory summaries, consensus statements and guidelines. For example, according to Google Scholar, over 2,550 consensus statements on hypertension have already been published this year.
provide more evidence for a “no action” default in an electronic medical record (EMR). EMRs are designed to facilitate documentation, often over-documentation, and there is rarely a capacity to set a no-action default. Unnecessary interventions could be minimized by legitimizing a set of no-action defaults based on AI’s iterative review of interventions that did not lead to positive outcomes. For example, in one study, an automatic stop on antibiotic orders after 48 hours of coverage reduced prescribing by 25% with no change in effectiveness outcomes (a minimal sketch of such a rule appears after this list).
help to decrease physical and administrative effort, e.g., by recognizing redundant, verbose, or non-outcome-related content and editing it – try it on a blog!
connect decision-making to benefit, cost, or, perhaps more importantly, social consequences across areas where a number of studies conflict. For example, some studies have shown that displaying the cost of laboratory tests at the point of ordering reduces ordering by 10 to 15%, whereas others have shown no difference.
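To make the no-action default mentioned above concrete, here is a minimal sketch of how such a rule could be expressed. It assumes a hypothetical order record with category, started_at and reviewed_by_clinician fields; it is an illustration of the idea only, not any EMR vendor’s actual API, and the 48-hour antibiotic auto-stop simply mirrors the study cited in the list.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of a "no-action default" as a simple rule.
# Field names and the 48-hour threshold are assumptions for this sketch,
# not a real EMR interface.

ANTIBIOTIC_AUTO_STOP = timedelta(hours=48)

def default_action(order: dict, now: datetime) -> str:
    """Return the default the EMR applies when no clinician acts on an order."""
    if order.get("category") != "antibiotic":
        return "continue"   # this rule only targets empirical antibiotic coverage
    if order.get("reviewed_by_clinician"):
        return "continue"   # an explicit clinical decision always overrides the default
    if now - order["started_at"] >= ANTIBIOTIC_AUTO_STOP:
        return "stop"       # no action within 48 hours -> the order lapses by default
    return "continue"

# Example: an unreviewed antibiotic order started 50 hours ago defaults to "stop".
order = {"category": "antibiotic",
         "started_at": datetime(2024, 5, 1, 8, 0),
         "reviewed_by_clinician": False}
print(default_action(order, datetime(2024, 5, 3, 10, 0)))  # -> "stop"
```

The point of the sketch is only that the default lives in the rule rather than in the clinician’s inbox; an AI layer could then tune which orders such defaults apply to, based on which interventions historically added no benefit.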
AI is all about aggregation to assist us in interpreting data and coping with the knowledge explosion in healthcare. With appropriate algorithms, it can swiftly provide clinicians with the information needed to reach appropriate diagnoses and manageable treatment plans, but only if clinicians critically review AI results and make their own judgments. In its most sophisticated configurations, AI should assist clinicians. If the incentives were right, it would take only a fraction of the computing knowledge and power now used to manipulate cryptocurrency for AI to make a real difference to our health and our wealth.