Pulse
18 March 2024
In November 2021, SingHealth and SGInnovate inked a partnership which seeks to leverage Artificial Intelligence to accelerate the transfer of ideas from bench to bedside. According to Medicus, a magazine published by Duke-NUS:
The three-year collaboration which will bring together the clinical and research expertise of healthcare professionals with the technological and translational capacities of industrial partners will serve as a hothouse for the creation of ideas and technological solutions that could address unmet clinical needs.
The use of AI in healthcare is a rapidly developing phenomenon in many parts of the world. It is widely recognised that this new and promising technology can assist doctors and nurses in many ways, thereby increasing the efficiency of the delivery of healthcare.
For example, AI can be used to decipher patients’ records, scanning them and producing a summary at great speed, enabling doctors to better understand their patients’ conditions and provide appropriate treatment.
AI can also help with triage and determine the order in which patients should be seen by doctors. It can read X-rays and scans and even help with the diagnosis.
Across the globe, healthcare systems are struggling with increasing costs and performance issues. This is often described as a ‘wicked problem’ because it has multiple causes which are not always easy to define and understand, much less solve.
In the face of these challenges, some policymakers and clinical entrepreneurs are optimistic that the solution can be found in Artificial Intelligence and Machine Learning.
This does not suggest that AI and robot doctors will eventually take care of all healthcare needs. But many are sanguine that the cooperation, as it were, between AI-mediated technology and doctors will go a long way towards mitigating some of the pressing issues faced by the healthcare sector at all levels.
Although AI-mediated technologies have greatly enhanced the delivery of healthcare in a variety of ways, several important problems and issues present themselves that should not be glossed over.
Using AI to interpret data can result in slippages that affect diagnosis and treatment. For example, doctors may use certain terms that the AI programme interprets erroneously. This is especially so when the differences in terminology are subtle, such as ‘alcohol abuse’ and ‘alcohol dependence.’
Another problem is that machine learning software often operates as a black box. This means that doctors are unable to discern exactly how a programme makes its decisions, and have to either accept or reject its recommendations. This raises issues surrounding transparency, explainability and accountability.
Be that as it may, many advocates of the use of AI in healthcare believe that algorithms can make more objective, robust and evidence-based decisions than doctors. This confidence in AI is not unfounded, as machine learning methods can take into account a far greater range of data than any group of doctors or healthcare providers.
Some scholars are more cautious about drawing such conclusions regarding the objectivity of AI algorithms. For example, in their book Media Technologies, Tarleton Gillespie, Pablo Boczkowski and Kirsten Foot argue that the belief that algorithms are more objective than humans is nothing more than a ‘carefully crafted myth.’
Other scholars have shown that just because algorithms can recognise patterns does not mean that these patterns are meaningful or useful. This is especially the case when the pool of data is simply too small for any conclusions drawn from it to be significant.
This brings us to the important issue of the collection, interpretation and use of data in AI-assisted healthcare. The fact that the collection, analysis and use of health data – from laboratory tests, medical records, clinical trials, etc – is the bedrock of medical research and the practice of medicine cannot be over-emphasised.
To be sure, over the past few decades, great strides have been made in this area. We now have a huge amount of personal data about individuals that includes genomic sources, radiological images, medical records and even non-health-related information.
However, in AI-assisted healthcare, the quality and relevance of the data amassed are just as important as the quantity. In its comprehensive paper entitled Ethics and Governance of Artificial Intelligence for Health (2021), the World Health Organisation (WHO) explains the dangers of poor-quality data in this way:
There is a danger that poor-quality data will be collected for AI training, which may result in models that predict artefacts in the data instead of actual clinical outcomes. There may also be no data, which, with poor-quality data, could distort the performance of an algorithm, resulting in inaccurate performance, or an AI technology might not be available for a specific population because of insufficient data.
In addition to poor-quality data, there is the problem of data biases, which often lurk in datasets and are difficult to detect and eradicate. Because current AI algorithms are greatly dependent on their training data, unaddressed and uncorrected biases can have detrimental and wide-ranging consequences, especially in healthcare delivery.
As Park Seong Ho et al. explain:
Because the datasets used to train AI algorithms for medical diagnosis/prediction are prone to selection biases and may not adequately represent a target population in real-world scenarios for various reasons, this strong dependency on training data is particularly concerning. Clarifying the biases and errors in training data and AI algorithms based on these training data before their implementation is critical, especially given the black-box nature of AI and the fact that cryptic biases and errors can harm numerous patients simultaneously and negatively affect health disparities at a large scale.
Another issue associated with AI-enabled healthcare has to do with decision-making. There is concern that AI-guided decision-making will displace human judgement and raise new questions about responsibility and accountability.
These issues are important because responsibility and accountability are indispensable in ensuring trust and the protection of human rights whenever there is an adverse outcome in the provision of healthcare. In the case of AI, a clear line of responsibility or accountability is not always easy to establish.
One of the challenges in drawing a clear line of accountability is what has been described as the ‘many hands problem’, which makes the ‘traceability’ of harm difficult if not impossible. This challenge, which is already present in complex healthcare decision-making systems where AI is not used, is made more acute by the presence of AI-mediated technology.
Because the development and deployment of AI involve the contributions of many agents, it is difficult, both legally and morally, to assign responsibility to any particular actor as responsibility is diffused among the different contributors to the technology.
The WHO document mentioned earlier spells out the implications thus:
Diffusion of responsibility may mean that an individual is not compensated for the harm he or she suffers, the harm itself and its cause are not fully detected, the harm is not addressed and societal trust in such technologies may be diminished if it appears that none of the developers or users of such technologies can be held responsible.
Finally, the discussion of AI-assisted healthcare cannot avoid the twin issues of the digital divide and what has been described as ‘data colonialism’.
The digital divide refers to the uneven distribution of, and access to, technologies among distinct groups. This situation prevails whenever advanced technology is used, but with the uptake of AI in healthcare, the gap will only widen further.
According to an online article published by Purdue University, data colonialism ‘is the process by which governments, non-governmental organisations and corporations claim ownership of and privatise data that is produced by their users and citizens.’
In the realm of AI-mediated healthcare, however, data colonialism refers to more than the appropriation and privatisation of data.
It also has to do with the question of abject inequity: a situation in which those who provide the data that made a project successful have very little control over that data and are the last to benefit from the research. This applies especially to data collected from underrepresented groups.
While AI-mediated technologies are to be welcomed for their tremendous promise in enhancing healthcare in the service of the common good, there are several important and complex issues, such as those discussed in this article, that require urgent attention.
These issues cannot be fully and adequately addressed by individual stakeholders and institutions. They require the concerted effort of the entire global community.
Dr Roland Chia is Chew Hock Hin Professor at Trinity Theological College (Singapore) and Theological and Research Advisor of the Ethos Institute for Public Christianity.