
Regulate use of AI in healthcare sector


“Malaysia needs specific guidelines on the development and deployment of artificial intelligence in the country for them to take effect at national and grassroots levels.”

In line with increasing awareness of the need to self-regulate the development and deployment of artificial intelligence (AI), the World Health Organisation (WHO) has identified six key principles for the use of AI for health in its most recent guidance report, “Ethics and Governance of Artificial Intelligence for Health”.

Considering the increased development and deployment of AI in Malaysia, especially in the healthcare sector, the report may serve as a valuable vantage point for our own local implementation.

The report notes the need for sector-specific guidelines in a landscape of increasing investment in AI, which is largely under-regulated. This brief note aims to highlight only three aspects of the extensive report, namely:

1) Accountability in AI development, deployment and use;

2) The importance of protecting patient data; and

3) Issues of bias in the development of AI systems.

The issue of accountability is reflected in the fourth key principle of the report. Noting the increasing number of private entities venturing into healthcare, whether through product development or public-private partnerships, the report suggests that ethical standards should be embedded into such product development.

In addition, it calls for accountability to be distributed among the numerous agents involved in the process, notwithstanding the explainability problems posed by black-box AI systems. Such a distribution may incentivise more measured conduct, especially among developers.

While the report is silent on a mechanism for distributing accountability, consideration may be given to the English common law Fairchild principle relating to breach of a duty of care.

The principle states that where there are multiple potential sources of harm, none of which can be proven to have in fact caused the harm, all the sources may be held liable in proportion to their contribution to the total exposure to risk. Such an approach, while currently confined to a narrow area of law, may be a stepping stone to effective accountability in AI.
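By way of illustration only (this mechanism is neither drawn from the report nor from the case law), the sketch below shows how damages might be apportioned among hypothetical agents in an AI deployment in proportion to each agent’s assumed share of the total risk exposure; the parties and figures are invented.

```python
# Illustrative only: splitting damages among multiple agents in proportion
# to each agent's assumed contribution to total risk exposure, in the
# spirit of the Fairchild-style apportionment described above.
# The parties and exposure shares below are hypothetical.

def apportion_liability(exposures: dict[str, float], damages: float) -> dict[str, float]:
    """Split `damages` across agents in proportion to their risk exposure."""
    total = sum(exposures.values())
    return {agent: damages * share / total for agent, share in exposures.items()}

# Hypothetical contributions to risk by three agents in an AI deployment.
exposures = {"developer": 0.5, "hospital_deployer": 0.3, "operator": 0.2}
print(apportion_liability(exposures, damages=100_000))
# -> {'developer': 50000.0, 'hospital_deployer': 30000.0, 'operator': 20000.0}
```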

The WHO AI principles are undoubtedly rooted in traditional medical ethics. It is interesting to note, however, that the report recognises shortcomings in existing medical ethics standards, for instance in relation to patient confidentiality, which stands as a pillar of medical practice.

The issue of patient data is addressed under the first of the key principles. Historically, patient data has been perceived as an important factor in the development of various fields of medicine. However, with the shift towards treating data as an asset with monetary value attached to it, its regulation has become critical.

In many jurisdictions, consent in data protection is treated as automatic carte blanche to use a subject’s personal data under the typically broad scope of a loosely drafted privacy notice. Such an approach imposes an undue burden on the subject while under-regulating the data user, in direct conflict with the principle of patient confidentiality.

The adequacy of the Personal Data Protection Act 2010 may need to be examined with a view to providing multi-tiered protection for patients.

The WHO AI report also sheds light on the issue of establishing AI systems that are equitable and inclusive, captured in the fifth of its key principles. Of the numerous examples cited in the report, two stand out.

The first is the use of AI systems in the development of drugs. The report highlights concerns over bias in training datasets, which have been shown to be disproportionate in respect of gender, socioeconomic status, race and access to technology.

As such, where AI systems are used in the development of drugs, those drugs may be appropriate only for the demographic represented in the dataset and not for a more diverse population. In such cases, a drug that is approved may be ineffective for the excluded population or may even be harmful to their health and well-being.
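As a rough, hypothetical illustration of the representativeness concern (not a method proposed in the report), the sketch below compares the demographic mix of an invented training dataset against an invented target population and flags under-represented groups.

```python
# Illustrative check of whether a training dataset's demographic mix
# matches the target population. All group names and figures are invented.

training_mix = {"group_a": 0.80, "group_b": 0.15, "group_c": 0.05}
population_mix = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

for group, pop_share in population_mix.items():
    train_share = training_mix.get(group, 0.0)
    # Flag groups whose dataset share falls well below their population share.
    status = "under-represented" if train_share < 0.8 * pop_share else "ok"
    print(f"{group}: dataset {train_share:.0%} vs population {pop_share:.0%} -> {status}")
```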

The second example cited is the use of AI systems in allocation and prioritisation. This is seen especially in strained healthcare systems where resources are limited.

It has been suggested that machine-learning algorithms could be trained and used to assist in decisions to ration supplies, to identify which individuals should receive critical care, or to determine when to discontinue certain interventions.

The report recognises the obvious advantage of an assisted decision-making process. However, it also highlights the significant risk posed by a biased system. As AI systems have been shown to amplify pre-existing biases in datasets, consideration must be given to building ethical design into prioritisation models.
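By way of a further hypothetical illustration (again, not a check prescribed by the report), the sketch below compares the rate at which a triage model recommends critical care across two invented groups, flagging any group whose rate falls below the conventional four-fifths threshold used in disparate-impact analysis.

```python
# Illustrative disparity check for a triage/prioritisation model:
# compare the rate at which each group is recommended for critical care.
# The counts below are invented.

recommended = {"group_a": 120, "group_b": 45}   # patients recommended for critical care
assessed = {"group_a": 200, "group_b": 150}     # patients assessed by the model

rates = {group: recommended[group] / assessed[group] for group in assessed}
highest = max(rates.values())
for group, rate in rates.items():
    # The four-fifths rule is a common heuristic, not a report requirement.
    status = "potential bias" if rate < 0.8 * highest else "ok"
    print(f"{group}: recommendation rate {rate:.0%} ({status})")
```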

While the WHO report is a welcome step in enlarging the discourse on the pivotal issue of AI use in healthcare, its proposals can only take effect with adequate implementation at a national and grassroots level.

It correctly notes that in the absence of transparency or enforcement, it is difficult to gauge compliance with the principles set out in the report or in any policy document.


Darmain Segaran is a technology lawyer and researcher in areas of data protection, privacy and AI ethics.
Originally published by Darmain Segaran on thesundaily, 9 August 2021.
https://www.thesundaily.my/home/regulate-use-of-ai-in-healthcare-sector-GG8181778