Vincent Buscemi and Dan Morris, Partners at Bevan Brittan LLP, discuss the role of ethics in the development and deployment of AI and data-driven healthcare solutions
The role of ethics has received too little attention in the rush to digital transformation of health and care systems around the world. As AI and data-driven solutions become increasingly integral to healthcare, the role of ethics in their development and deployment has never been more critical. These technologies promise to revolutionise patient care, offering a dizzying array of new and enhanced services with immense potential to improve patient outcomes and streamline care. However, they also pose significant ethical challenges that need careful consideration. As the late Henry Kissinger and his co-authors remarked, while the number of individuals, corporations and governments capable of creating AI has grown exponentially, ‘the ranks of those contemplating this technology’s implications for humanity – social, legal, philosophical, spiritual, moral – remain dangerously thin.’
At conferences, in boardrooms and even within the corridors of power, the focus has been on how to harness the great potential of AI: facilitate rapid adoption; scale up; drive growth; overcome technical challenges; and maximise return on investment. In short, the commercial imperative has, as usual, been the engine room of innovation.
Nevertheless, as AI becomes increasingly embedded in healthcare, people are starting to see how it affects them as individuals and, indeed, as populations. As a result, the calls for regulation, guardrails and standards grow ever louder. But is regulation the panacea that many suggest it is? Does more regulation really provide the best answer? Arguably, it does not, for several important reasons.
The limitations of regulation
Firstly, the regulatory landscape within which AI and data-driven health tech solutions sit is incredibly complex. While regulation can play a crucial role in mitigating risks and guiding the development and deployment of AI, it is not a one-size-fits-all solution, and to treat it as one is an oversimplification. The effectiveness of regulation depends on many factors, including the nature of the regulations, the context in which they are applied, the pace of technological development, and the readiness of society and the economy to adapt.
Developers, operators, adopters and even the regulators themselves do not always fully appreciate their precise roles and responsibilities, or cannot readily ascertain whether compliance has been achieved. This is hardly surprising when there are so many disparate entities involved. In the UK alone, developers and deployers of AI and data-driven solutions might, depending on the nature of a particular product, need to understand and navigate the rules and requirements of the Information Commissioner’s Office (ICO), the Medicines and Healthcare products Regulatory Agency (MHRA) and the Care Quality Commission (CQC). In addition, there are different digital, clinical and patient safety standards to be considered, plus bright-line legal obligations such as those arising under the EU AI Act, the GDPR, the Data Protection Act (DPA) and common law rules of confidentiality.
Secondly, many regulatory bodies are struggling with existing burdens, and effective regulation requires not just the creation of rules but also their implementation and enforcement. This can be challenging, especially with complex technologies like AI. There may be difficulties in monitoring compliance, interpreting regulations consistently and applying penalties where necessary. Recent high-profile criticism of organisations such as the CQC and the Nursing and Midwifery Council (NMC) has led to genuine concerns about whether some regulators are fit for purpose. The question arises: do existing regulators have the capacity, the expertise and the adeptness to cope with the novel issues arising from AI and data-driven health tech? Many reasonable commentators think not.
Thirdly, regulators and regulatory requirements will usually be outpaced by innovation. By the time regulatory standards are put in place and rules are written, codified and disseminated, technological developments will have crossed the finishing line, done a victory lap and be up on the podium collecting medals. This lag makes it difficult for regulations to address emerging risks and challenges, particularly in a global context. Regulation also involves ethical considerations that are often subjective and culturally dependent: different societies hold different views on privacy, security and the acceptable uses of AI and data, making universally acceptable regulations hard to craft.
Developing ethical codes
If increased regulation is not necessarily the answer, or at least not the best answer, how else do we ensure that AI and data-driven health tech solutions are on the side of right?
The answer, possibly, lies in a combination of ethics, self-regulation, industry standards, education and awareness. Obviously, AI, LLMs and algorithms are incapable of morality; but the humans developing and deploying this technology certainly possess that capacity.
But what does this actually mean? What are ethics, whose ethics are we talking about, and how are they to be employed?
The answers to these questions are, of course, beyond the scope of this article; hundreds of moral philosophers could write yards of library shelves on the subject. However, many of the biggest players in this space are increasingly developing their own codes of ethics for the responsible development and deployment of AI: Microsoft has its Responsible AI Principles; AWS has its Core Dimensions of Responsible AI. How relevant such codes will prove to AI and data-driven healthcare solutions remains to be seen.
So what about ethics and the commercial imperative? The integration of ethics into AI and data-driven healthcare cannot be separated from the commercial realities of the tech and healthcare industries, and this interface adds another layer of complexity. Companies are often driven by profitability and shareholder expectations, which can conflict with ethical imperatives such as patient safety, privacy and fairness. But the best tech will incorporate such considerations, and market forces will jettison products that do not; investors, moreover, are almost certain to insist that ethical considerations be embedded into business models and development and deployment processes from the start.
If, as societies, we do not start thinking about these issues, we will certainly prove Arthur C. Clarke utterly correct. Ever the futurologist, he wrote: ‘As our own species is in the process of proving, one cannot have superior science and inferior morals. The combination is unstable and self-defeating.’