Imagine having the ability to take a photo of the inside of a human eye and have a computer tell you if that person is at risk of Alzheimer’s disease or a stroke.
Thanks to recent developments in artificial intelligence (AI), that possibility is on the near horizon.
AI is poised to help healthcare professionals more accurately diagnose disease, determine the right treatments, and ultimately provide better care for patients. But it isn’t magic.
Applying AI in any field means training machines to solve problems and make decisions based on data sets. In medicine, that requires enormous amounts of healthcare data from the general population, likely including you and me.
The potential loss of control over our most sensitive medical information may sound frightening. But the risks to privacy are within our ability to manage, and the potential for AI to save lives is too vast to ignore.
Researchers recently unveiled a new method for detecting COVID-19 using AI. The process, developed at the Terasaki Institute for Biomedical Innovation in Southern California, applies an AI model to images of the lungs. The technology can identify signs of infection that a human doctor cannot detect unaided.
This development is just one example of how AI can change the landscape of medicine.
In another recent study, scientists in France used an AI program to accurately detect lung nodules, identifying malignant lesions up to a year before a radiologist could. And the earlier cancer is detected, the earlier it can be treated, and the better the outcomes.
These results suggest that AI could help doctors screen for lung cancer in the not-too-distant future.
And AI can do more than spot diseases humans can't see. It can help stratify risk, prevent infection, and detect disease spreading through the body. Researchers are also starting to apply AI to devise personalized cancer treatments based on a patient's DNA.
Empowering algorithms to influence choices about our health does, of course, come with risks. We've seen enough corporate data breaches to know how quickly information can be stolen or misused.
Then there’s the fact that poorly designed AI, trained on data that doesn’t accurately reflect a patient population, can replicate humans’ own worst discriminatory behavior.
But we know enough about these risks to mitigate them proactively. For example, we now know we must train AI on data sets that reflect our actual demographics, in all their diversity.
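To make that idea concrete, here is a minimal, hypothetical sketch of one sanity check a team might run before training: comparing a data set's demographic mix against population benchmarks. The field names, groups, and tolerance are invented for illustration, not drawn from any real system.

```python
from collections import Counter

def representation_gaps(records, benchmarks, tolerance=0.05):
    """Return groups whose share of `records` differs from the
    population benchmark by more than `tolerance`.

    `records` is a list of dicts with a (hypothetical) "group" field;
    `benchmarks` maps each group to its expected population share."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmarks.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = (actual, expected)
    return gaps

# A toy data set where group B is underrepresented relative to
# a benchmark of 40% of the population.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
benchmarks = {"A": 0.6, "B": 0.4}
print(representation_gaps(records, benchmarks))
```

A check like this flags skew before a model is trained, rather than after it starts producing biased recommendations.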
And we must make sure patient data is truly anonymized when it needs to be.
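What does anonymization look like in practice? One common first step is pseudonymization: replacing direct identifiers with salted hashes so records can still be linked for research without exposing who they belong to. The sketch below is a simplified illustration with invented field names; real de-identification also requires removing or generalizing quasi-identifiers like birth dates and ZIP codes.

```python
import hashlib
import secrets

# Secret salt, kept separate from the data so hashes can't be
# reversed by guessing inputs. Illustrative only.
SALT = secrets.token_bytes(16)

def pseudonymize(record, id_fields=("name", "ssn")):
    """Return a copy of `record` with identifier fields replaced
    by truncated salted SHA-256 hashes."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256(SALT + out[field].encode()).hexdigest()
            out[field] = digest[:12]  # truncated for readability
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "J18.9"}
print(pseudonymize(patient))
```

The clinical fields survive intact for analysis, while the same person hashes to the same pseudonym across records, preserving linkability without identity.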
At the same time, AI cannot work well without large volumes of data. Collecting the data AI needs to deliver on its promise requires building trust across the healthcare community.
Here’s how we can build that trust.
First, doctors and other medical professionals should remain the final decision-makers at every step of the patient’s journey, from diagnosis with the help of AI to treatment and follow-up based on AI recommendations. AI should inform our choices, not make the final calls.
Second, we should use AI to complement, not replace, the work human healthcare professionals do best. An ideal use case for AI is handling repetitive, behind-the-scenes medical work, like documentation and data analysis.
Freed from this work, healthcare professionals can get back to the core of practicing medicine: interacting one-on-one with patients, listening, and making empathetic decisions.
Finally, the benefits of AI must be shared widely, not reserved for the privileged. AI should be a guide in advancing equity. We can use AI to identify communities in need of specialized care, then find the best ways of delivering that care outside the walls of a hospital or clinic.
Merely having access to data doesn’t make us smarter. As humans, we’re fully capable of applying the technology we invent in unethical or ill-thought-out ways. But the promise of AI is immense. The task before us now is to apply it well.