Artificial intelligence is outperforming doctors in some areas, showing promise as a medical tool. It’s also giving rise to ethical concerns.
Earlier this year, the headline raced across the internet: a computer had detected skin cancer more accurately than doctors. Suddenly, the future of the medical profession seemed in doubt. Could a computer powered by artificial intelligence make these professionals obsolete?
Not likely, insist researchers, who envision artificial intelligence (AI) as a tool to help, not replace, doctors. Machine learning technology shows great promise in helping diagnose a host of illnesses, but ethical concerns persist.
Artificial Intelligence to Diagnose Melanoma
Dermatology lends itself well to computer-based diagnosis since the profession is so visual. A dermatologist examines a patient’s skin for signs that an imperfection may be cancerous. If the doctor suspects cancer, they order a biopsy to confirm the diagnosis.
Doctors’ initial diagnoses turn out to be correct about 60 percent of the time, rising to as much as 80 percent when they use modern tools such as a dermatoscope (a skin-surface magnifying device), according to IBM Watson researcher Noel Codella.
In late May 2018, German researchers announced that their artificial intelligence outperformed doctors at detecting melanoma. In the study, 58 dermatologists detected 86.6 percent of melanomas, rising to 88.9 percent when given additional information such as age, sex, and lesion location. The deep learning convolutional neural network (CNN) topped the dermatologists, detecting 95 percent of melanomas.
Researchers trained the AI by showing it real photographs of skin imperfections along with the final diagnosis. This allowed the computer to learn for itself the visual differences between cancer and a benign lesion.
Doctors usually rely on the ABCD criteria – asymmetry, border irregularity, color, and diameter – to judge whether a lesion is likely to be cancerous, along with years of professional experience, training, and intuition. Researchers did not program the computer with the ABCD rules. Instead, it learned much as a child would, working out for itself why one lesion is cancerous while another is not.
Researchers may not know what is inside the so-called black box – the process which allowed the AI to correctly identify skin cancer – but testing the AI against real diagnoses confirms it works. Scientists envision machine learning as a helpful tool for doctors. AI could alert physicians to a potential diagnosis they may have otherwise missed based on some small detail.
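For readers curious what “learning from labeled examples” looks like in practice, here is a minimal, hypothetical sketch in Python. It trains a toy logistic-regression classifier on made-up lesion features rather than hard-coding ABCD thresholds; a real CNN learns from raw pixels and is vastly more complex, so this illustrates only the principle that the decision rule is learned, not programmed.

```python
import math

# Toy illustration (not medical software): each "lesion" is a feature vector
# [asymmetry, border irregularity, color variance, diameter/10] with a label
# (1 = malignant, 0 = benign). All numbers here are invented.
TRAIN = [
    ([0.9, 0.8, 0.7, 0.80], 1),
    ([0.8, 0.9, 0.6, 0.75], 1),
    ([0.7, 0.7, 0.8, 0.90], 1),
    ([0.1, 0.2, 0.1, 0.30], 0),
    ([0.2, 0.1, 0.2, 0.25], 0),
    ([0.3, 0.2, 0.1, 0.40], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=3000, lr=0.5):
    """Fit logistic-regression weights by stochastic gradient descent.
    No ABCD thresholds are programmed in; the weights are learned
    purely from the labeled examples."""
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return the model's estimated probability that a lesion is malignant."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(TRAIN)
print(predict(w, b, [0.85, 0.8, 0.7, 0.82]))  # suspicious lesion: score near 1
print(predict(w, b, [0.15, 0.1, 0.2, 0.31]))  # benign-looking lesion: score near 0
```

The point of the sketch is the asymmetry with rule-based software: nothing in the code says “large, irregular lesions are cancerous”; that pattern emerges from the training examples, which is also why the resulting weights can be hard to interpret – the “black box” the researchers describe.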
Artificial Intelligence to Diagnose Other Medical Conditions
Nail fungus, which affects 35 million Americans, is a common condition dermatologists see in their offices every day. In January 2018, a team of South Korean researchers demonstrated that a convolutional neural network diagnosed the condition better than the majority of dermatologists.
A dermatologist still needs to confirm the computer’s diagnosis, taking the patient’s medical history and cues such as foot odor into consideration. However, the fungus-diagnosing AI could reduce doctors’ office visits by letting patients get a quick telemedicine diagnosis and an antifungal prescription from the comfort of their own homes.
AI is advancing in other areas as well. Google’s Verily software, built with deep learning, predicted heart attack risk almost as well as doctors. Relying on visual cues from eye scans that indicate blood pressure, smoking status, and age – all risk factors for heart disease – the AI made correct predictions 70 percent of the time. Doctors using the same cues plus a blood test were accurate 72 percent of the time.
Concerns About AI in Health
Because machine learning algorithms start from data that humans provide, they can absorb conscious or unconscious bias. AI systems have already made facial recognition errors tied to race and gender, and can echo prevalent social biases in natural language processing, depending on the quality of the training data researchers supply.
Stanford University researchers are experimenting with AI to predict which patients should access palliative care. In a profit-based healthcare setting, though, patient health may not be the only goal.
“What if the algorithm is designed around the goal of saving money?” asks Stanford Center for Biomedical Ethics Director David Magnus. “What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”
Researchers have also demonstrated just how easy it is to manipulate AI. By making adjustments to a photo too tiny for humans to notice, Harvard researchers caused misclassification in 100 percent of test cases. In healthcare, such attacks could mean patients do not get the treatment they need (saving insurers money), or get unnecessary treatment (earning providers money).
Some argue there are other ways to commit fraud without manipulating images, but Harvard Medical School researcher Andrew Beam points out an advantage of this one: “It would be very difficult to detect that the attack has occurred.” Fraud is nothing new to the $3.3 trillion U.S. healthcare industry: researchers have estimated healthcare fraud at $272 billion in a single year.
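The kind of attack described above can be sketched against a far simpler model than the ones the researchers actually studied. Below is a hypothetical Python illustration: for a toy linear classifier, nudging every input value by a tiny fixed amount in the direction that raises the model’s score (the idea behind the fast gradient sign method) flips the predicted class, while each individual change stays small enough that a human would never notice it in an image.

```python
# Toy illustration of an adversarial perturbation (an assumed setup, not the
# Harvard team's actual models or data). The "classifier" is a fixed linear
# scorer over five made-up pixel values; class 1 if the score is positive.
W = [0.8, -0.5, 0.3, -0.9, 0.6]
B = -0.05

def classify(x):
    score = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 if score > 0 else 0

def perturb(x, eps):
    """FGSM-style step: for a linear model, the gradient of the score with
    respect to the input is just the weight vector, so moving each pixel by
    eps in the direction of sign(w) raises the score as fast as possible."""
    return [xi + eps * (1.0 if w > 0 else -1.0) for w, xi in zip(W, x)]

x = [0.2, 0.4, 0.1, 0.3, 0.2]   # original "pixels", all in [0, 1]
x_adv = perturb(x, eps=0.08)    # each pixel moves by only 0.08

print(classify(x))      # 0 – original prediction
print(classify(x_adv))  # 1 – flipped by changes too small to notice
```

In a real deep network the gradient is computed by backpropagation rather than read off the weights, but the effect the researchers describe is the same: per-pixel changes far below human perception can flip the output, and nothing about the altered image looks suspicious afterward.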
The Future of AI in Healthcare
When it comes to melanoma and heart attacks, early detection greatly increases the chance of survival, and any technology that saves lives is a boon to healthcare. Accenture Consulting predicts the medical AI industry will be worth $6.6 billion by 2021 as computing giants and industry disruptors build “the next big thing.”
The power of AI is its ability to learn with time, as University of Toronto computer scientist Geoffrey Hinton illustrates: “There’s no such system for a human radiologist. If you miss something, and a patient develops cancer five years later, there’s no systematic routine that tells you how to correct yourself. But you could build in a system to teach the computer to achieve exactly that.”