Sharona Hoffman - Professor of Health Law and Bioethics, Case Western Reserve University.
The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.
AI generally refers to computers’ ability to mimic human intelligence and to learn. For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment. They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.
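As a rough illustration of what that kind of machine learning involves, the sketch below trains a classifier on synthetic data standing in for image-derived tumor features. The feature names, data and library choices are mine for illustration, not drawn from any actual oncology system.

```python
# A minimal sketch of supervised machine learning for treatment response,
# assuming numeric features have already been extracted from radiological
# images. All data here is synthetic and the features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Stand-ins for image-derived tumor features: size, shape irregularity,
# texture contrast.
X = rng.normal(size=(500, 3))
# Synthetic labels: 1 = tumor responded to chemotherapy, 0 = it did not.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```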
But AI in medicine also raises significant legal and ethical challenges, among them concerns about privacy, discrimination, psychological harm and the physician-patient relationship. In a forthcoming article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.
Potential for discrimination
AI involves analyzing very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come not only from electronic health records and health insurance claims but also from several surprising sources. AI can draw on purchasing records, income data, criminal records and even social media for information about an individual’s health.
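To make that pattern-finding concrete, here is a toy sketch of risk prediction from heterogeneous records. Every feature name, coefficient and data value below is hypothetical; real systems work with vastly larger and messier data sets.

```python
# A hedged sketch of pattern-based risk prediction from heterogeneous
# records. All feature names and data here are hypothetical stand-ins
# for the kinds of sources mentioned above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 1000

tobacco_purchases = rng.poisson(2, size=n)       # purchasing records
income_decile = rng.integers(1, 11, size=n)      # income data
er_visits_last_year = rng.poisson(0.5, size=n)   # health records

X = np.column_stack([tobacco_purchases, income_decile, er_visits_last_year])
# Synthetic outcome: 1 = condition developed within five years.
risk = 0.3 * tobacco_purchases - 0.1 * income_decile + 0.8 * er_visits_last_year
y = (risk + rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
# Predicted probability of the condition for one hypothetical individual.
print(model.predict_proba([[5, 3, 1]])[0, 1])
```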
Researchers are already using AI to predict a multitude of medical conditions. These include heart disease, stroke, diabetes, cognitive decline, future opioid abuse and even suicide. As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.”
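Facebook has not published its method, but the basic idea of flagging a post from the language around it can be sketched with simple phrase matching. The phrase list, threshold and function name below are purely illustrative, not Facebook's actual algorithm.

```python
# A toy illustration of flagging a post for review based on the language
# in its comments. Facebook's actual system is proprietary and far more
# sophisticated; this phrase list and threshold are purely illustrative.
CONCERN_PHRASES = ["are you okay", "goodbye", "please don't do this"]

def flag_for_review(comments: list[str]) -> bool:
    """Return True if two or more concern phrases appear in the comments."""
    text = " ".join(comments).lower()
    hits = sum(phrase in text for phrase in CONCERN_PHRASES)
    return hits >= 2  # the threshold here is an arbitrary choice

print(flag_for_review(["Are you okay?", "Please don't do this", "Call me"]))
# -> True
```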