One can’t speak about the immense potential benefits of AI and IoT in medical devices without pausing to examine the threats and vulnerabilities posed by their increasing ubiquity. Two concerns rise to the forefront: one sees human bad actors exploiting the machines, and the other finds the machines themselves replicating, or even amplifying, the worst of human failures.
The Internet of Medical Things
According to one estimate, 50 billion medical devices will be connected to clinical systems within the next decade.1 As the IoMT (Internet of Medical Things) grows to include more applications and devices, its exposure to risk will naturally increase. Bad actors can be every bit as industrious as the life scientists and R&D researchers they target, making cybersecurity a key battleground.
Security Concerns of Medical Devices
In 2019, a study from Ben-Gurion University showed how hackers could potentially manipulate the CT and MRI results of lung cancer patients, altering key data about their tumours.
According to Med City News “Both radiologists and AI algorithms were unable to differentiate between the altered and correct scans. This kind of tampering has the potential to impact patient lives, and can also result in insurance fraud, ransomware attacks and other issues for both patients and providers.”1
Another study conducted by McAfee found that infusion pumps were also vulnerable to manipulation. Researchers warn that hackers could ‘deliver double doses of medications to patients without detection.’2
It’s not difficult to see the potentially disastrous results of a breach, and the vulnerabilities are real.
“Bad actors often need little more than an emulator — which enables one computer system to behave like another — and a piece of code from the system being targeted in order to successfully program AI to hack a device,” reports Med City News.1
Protecting Medical Devices from Hacking
Protections against this include access control layers, anomaly detection, and devices built to combat reverse engineering. It’s easier to implement these safeguards during product development than it is to retrofit them into existing devices, making foresight and vigilance key components of the battle.
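Anomaly detection, one of the safeguards mentioned above, can take many forms. As a purely illustrative sketch (not any particular vendor’s implementation), the toy example below flags an infusion-pump reading that deviates sharply from its recent history — the kind of “double dose” manipulation the McAfee researchers warn about. All values, field names, and thresholds here are invented for illustration.

```python
# Illustrative sketch: flagging an anomalous infusion-pump dose rate
# with a simple z-score against recent readings. Real devices use far
# more sophisticated detection; numbers here are invented.
from statistics import mean, stdev

def is_anomalous(history, reading, threshold=3.0):
    """Flag a reading that deviates sharply from recent history."""
    if len(history) < 5:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu  # any change from a flat baseline is suspect
    return abs(reading - mu) / sigma > threshold

# A suddenly doubled dose stands out against a stable baseline.
baseline = [5.0, 5.1, 4.9, 5.0, 5.2, 5.0]  # mL/hr, hypothetical
print(is_anomalous(baseline, 10.0))  # doubled dose -> True
print(is_anomalous(baseline, 5.1))   # normal reading -> False
```

The point of the sketch is the design principle: a check like this is cheap to build into a device during development, but far harder to bolt onto a fleet of pumps already in the field.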
With this in mind, the University of Minnesota recently announced a new Center for Medical Device Cybersecurity (CMDC). The initiative sees the University collaborating with industry partners to form a hub for ‘workforce training, outreach, and discovery to bolster the emerging field.’2
“While manufacturers can ensure a high-level of safety through testing, the security of connected-devices remains a growing and moving target, making this collaboration and the work of the CMDC critical to the industry and all those it serves,” says Technology Leadership Institute director Allison Hubel.2
Potential Bias of AI in Healthcare
The healthcare system is man-made and, as such, subject to bias, both conscious and unconscious. As more and more decisions are made on the basis of data, AI, and machine learning, the hope is that these biases will be eliminated. Without careful planning and oversight, however, we may find machines simply repeating our human mistakes.
Consider the real-world example of an algorithm designed to score patients so that those with the greatest healthcare needs could be prioritized.3 Among patients with equal scores, Black patients were, in fact, sicker than White patients. Investigation revealed that the algorithm had used ‘health costs’ as a proxy for ‘health needs’. Because White patients have traditionally received a higher level of care, their health costs, and thus their scores, were inflated. Rather than eliminate human bias, the algorithm had merely echoed it.
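The proxy problem described above is easy to demonstrate in miniature. The toy example below is not the actual algorithm from the study; every number is invented. It simply shows how ranking patients by past spending, rather than by true need, under-prioritizes a group that has historically had less access to care, even when its needs are identical.

```python
# Toy illustration of proxy bias: two groups with identical true
# needs but unequal historical spending. All numbers are invented.
patients = [
    # (group, true_need, past_cost)
    ("A", 8, 9000),
    ("A", 5, 6000),
    ("B", 8, 5000),  # equally sick, but historically lower spending
    ("B", 5, 3000),
]

def rank_by(patients, key_index):
    """Return group labels ordered from highest to lowest on one field."""
    return [p[0] for p in sorted(patients, key=lambda p: -p[key_index])]

print(rank_by(patients, 2))  # by past cost:  ['A', 'A', 'B', 'B']
print(rank_by(patients, 1))  # by true need:  ['A', 'B', 'A', 'B']
```

Scoring on cost pushes every group-B patient to the back of the queue; scoring on the underlying need interleaves the groups fairly. The bias lives entirely in the choice of proxy, not in the sorting code itself.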
More troubling still is the as-yet-unexplained ability of an AI system to accurately identify a patient’s race from X-rays and CT scans. Doctors and computer scientists alike are at a loss to explain how the algorithm does so, leaving them unable to address the issue.
“That means that we would not be able to mitigate the bias,” Dr. Judy Gichoya, a co-author of the study and radiologist at Emory University, told Motherboard. “Our main message is that it’s not just the ability to identify self-reported race, it’s the ability of AI to identify self-reported race from very, very trivial features. If these models are starting to learn these properties, then whatever we do in terms of systemic racism … will naturally populate to the algorithm.”
Eliminating Bias in Healthcare
Research and collaboration will be key to preventing bias as AI continues to play a greater role in decision making.
“AI is only as good as the data you train it on,” says Cognoa chief medical officer Dr. Sharief Taraman.5 Taraman believes that development should involve collaboration between data scientists, clinicians, and other stakeholder groups such as patient advocacy groups.
“If we’re very intentional about making sure we include all of those folks, we do it in a way that actually removes the biases and gets rid of them,” he says.5
AI is being adopted by nearly every industry, and medicine is no exception. While headlines often center on its ever-increasing capabilities, those who work to mitigate risk and bias should not be overlooked; they play a key role in maximizing the benefits the technology can deliver.